00:00:00.001 Started by upstream project "autotest-per-patch" build number 126259 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.138 The recommended git tool is: git 00:00:00.139 using credential 00000000-0000-0000-0000-000000000002 00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.178 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.216 Using shallow fetch with depth 1 00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.216 > git --version # timeout=10 00:00:00.240 > git --version # 'git version 2.39.2' 00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.260 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.260 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.512 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.525 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.537 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.537 > git config core.sparsecheckout # timeout=10 00:00:06.548 > git read-tree -mu HEAD # timeout=10 00:00:06.564 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.586 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.586 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.692 [Pipeline] Start of Pipeline 00:00:06.705 [Pipeline] library 00:00:06.706 Loading library shm_lib@master 00:00:06.706 Library shm_lib@master is cached. Copying from home. 00:00:06.721 [Pipeline] node 00:00:06.728 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.729 [Pipeline] { 00:00:06.737 [Pipeline] catchError 00:00:06.739 [Pipeline] { 00:00:06.750 [Pipeline] wrap 00:00:06.758 [Pipeline] { 00:00:06.764 [Pipeline] stage 00:00:06.766 [Pipeline] { (Prologue) 00:00:06.781 [Pipeline] echo 00:00:06.782 Node: VM-host-SM17 00:00:06.786 [Pipeline] cleanWs 00:00:06.793 [WS-CLEANUP] Deleting project workspace... 00:00:06.793 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.798 [WS-CLEANUP] done 00:00:07.081 [Pipeline] setCustomBuildProperty 00:00:07.150 [Pipeline] httpRequest 00:00:07.165 [Pipeline] echo 00:00:07.166 Sorcerer 10.211.164.101 is alive 00:00:07.173 [Pipeline] httpRequest 00:00:07.177 HttpMethod: GET 00:00:07.177 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.178 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.178 Response Code: HTTP/1.1 200 OK 00:00:07.178 Success: Status code 200 is in the accepted range: 200,404 00:00:07.179 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.444 [Pipeline] sh 00:00:08.722 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.739 [Pipeline] httpRequest 00:00:08.760 [Pipeline] echo 00:00:08.761 Sorcerer 10.211.164.101 is alive 00:00:08.768 [Pipeline] httpRequest 00:00:08.772 HttpMethod: GET 00:00:08.773 URL: http://10.211.164.101/packages/spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:00:08.773 Sending request to url: http://10.211.164.101/packages/spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:00:08.802 Response Code: HTTP/1.1 200 OK 00:00:08.802 Success: Status code 200 is in the accepted range: 200,404 00:00:08.803 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:01:01.098 [Pipeline] sh 00:01:01.426 + tar --no-same-owner -xf spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:01:03.966 [Pipeline] sh 00:01:04.240 + git -C spdk log --oneline -n5 00:01:04.240 e9e51ebfe nvme/pcie: allocate cq from device-local numa node's memory 00:01:04.240 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:01:04.240 47ca8c1aa nvme: populate socket_id for rdma controllers 00:01:04.240 c1860effd nvme: populate socket_id for tcp controllers 00:01:04.240 91f51bb85 nvme: populate socket_id for pcie controllers 00:01:04.259 [Pipeline] writeFile 00:01:04.273 [Pipeline] sh 00:01:04.554 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.569 [Pipeline] sh 00:01:04.849 + cat autorun-spdk.conf 00:01:04.849 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.849 SPDK_TEST_NVMF=1 00:01:04.849 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.849 SPDK_TEST_URING=1 00:01:04.849 SPDK_TEST_USDT=1 00:01:04.849 SPDK_RUN_UBSAN=1 00:01:04.849 NET_TYPE=virt 00:01:04.849 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.855 RUN_NIGHTLY=0 00:01:04.858 [Pipeline] } 00:01:04.878 [Pipeline] // stage 00:01:04.897 [Pipeline] stage 00:01:04.900 [Pipeline] { (Run VM) 00:01:04.916 [Pipeline] sh 00:01:05.196 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:05.196 + echo 'Start stage prepare_nvme.sh' 00:01:05.196 Start stage prepare_nvme.sh 00:01:05.196 + [[ -n 7 ]] 00:01:05.196 + disk_prefix=ex7 00:01:05.196 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:05.196 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:05.196 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:05.196 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.196 ++ SPDK_TEST_NVMF=1 00:01:05.196 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.196 ++ SPDK_TEST_URING=1 00:01:05.196 ++ SPDK_TEST_USDT=1 00:01:05.196 ++ SPDK_RUN_UBSAN=1 00:01:05.196 ++ NET_TYPE=virt 00:01:05.196 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.196 ++ RUN_NIGHTLY=0 
00:01:05.196 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:05.196 + nvme_files=() 00:01:05.196 + declare -A nvme_files 00:01:05.196 + backend_dir=/var/lib/libvirt/images/backends 00:01:05.196 + nvme_files['nvme.img']=5G 00:01:05.196 + nvme_files['nvme-cmb.img']=5G 00:01:05.196 + nvme_files['nvme-multi0.img']=4G 00:01:05.196 + nvme_files['nvme-multi1.img']=4G 00:01:05.196 + nvme_files['nvme-multi2.img']=4G 00:01:05.196 + nvme_files['nvme-openstack.img']=8G 00:01:05.196 + nvme_files['nvme-zns.img']=5G 00:01:05.196 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:05.196 + (( SPDK_TEST_FTL == 1 )) 00:01:05.196 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:05.196 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:05.196 + for nvme in "${!nvme_files[@]}" 00:01:05.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:05.196 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.196 + for nvme in "${!nvme_files[@]}" 00:01:05.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:05.763 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.763 + for nvme in "${!nvme_files[@]}" 00:01:05.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:05.763 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:05.763 + for nvme in "${!nvme_files[@]}" 00:01:05.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:05.763 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.763 + for nvme in "${!nvme_files[@]}" 00:01:05.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:05.763 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.763 + for nvme in "${!nvme_files[@]}" 00:01:05.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:05.763 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.763 + for nvme in "${!nvme_files[@]}" 00:01:05.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:06.696 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.696 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:06.696 + echo 'End stage prepare_nvme.sh' 00:01:06.696 End stage prepare_nvme.sh 00:01:06.707 [Pipeline] sh 00:01:06.985 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:06.986 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:06.986 00:01:06.986 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:01:06.986 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:06.986 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:06.986 HELP=0 00:01:06.986 DRY_RUN=0 00:01:06.986 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:06.986 NVME_DISKS_TYPE=nvme,nvme, 00:01:06.986 NVME_AUTO_CREATE=0 00:01:06.986 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:06.986 NVME_CMB=,, 00:01:06.986 NVME_PMR=,, 00:01:06.986 NVME_ZNS=,, 00:01:06.986 NVME_MS=,, 00:01:06.986 NVME_FDP=,, 00:01:06.986 SPDK_VAGRANT_DISTRO=fedora38 00:01:06.986 SPDK_VAGRANT_VMCPU=10 00:01:06.986 SPDK_VAGRANT_VMRAM=12288 00:01:06.986 SPDK_VAGRANT_PROVIDER=libvirt 00:01:06.986 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:06.986 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:06.986 SPDK_OPENSTACK_NETWORK=0 00:01:06.986 VAGRANT_PACKAGE_BOX=0 00:01:06.986 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:06.986 FORCE_DISTRO=true 00:01:06.986 VAGRANT_BOX_VERSION= 00:01:06.986 EXTRA_VAGRANTFILES= 00:01:06.986 NIC_MODEL=e1000 00:01:06.986 00:01:06.986 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:06.986 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:09.540 Bringing machine 'default' up with 'libvirt' provider... 00:01:10.472 ==> default: Creating image (snapshot of base box volume). 00:01:10.472 ==> default: Creating domain with the following settings... 00:01:10.472 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721082447_504708fcee981d89093f 00:01:10.472 ==> default: -- Domain type: kvm 00:01:10.472 ==> default: -- Cpus: 10 00:01:10.472 ==> default: -- Feature: acpi 00:01:10.472 ==> default: -- Feature: apic 00:01:10.472 ==> default: -- Feature: pae 00:01:10.472 ==> default: -- Memory: 12288M 00:01:10.472 ==> default: -- Memory Backing: hugepages: 00:01:10.472 ==> default: -- Management MAC: 00:01:10.472 ==> default: -- Loader: 00:01:10.472 ==> default: -- Nvram: 00:01:10.472 ==> default: -- Base box: spdk/fedora38 00:01:10.472 ==> default: -- Storage pool: default 00:01:10.472 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721082447_504708fcee981d89093f.img (20G) 00:01:10.472 ==> default: -- Volume Cache: default 00:01:10.472 ==> default: -- Kernel: 00:01:10.472 ==> default: -- Initrd: 00:01:10.472 ==> default: -- Graphics Type: vnc 00:01:10.472 ==> default: -- Graphics Port: -1 00:01:10.472 ==> default: -- Graphics IP: 127.0.0.1 00:01:10.472 ==> default: -- Graphics Password: Not defined 00:01:10.472 ==> default: -- Video Type: cirrus 00:01:10.472 ==> default: -- Video VRAM: 9216 00:01:10.472 ==> default: -- Sound Type: 00:01:10.472 ==> default: -- Keymap: en-us 00:01:10.472 ==> default: -- TPM Path: 00:01:10.472 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:10.472 ==> default: -- Command line args: 00:01:10.472 ==> default: -> value=-device, 00:01:10.472 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:10.472 ==> default: -> value=-drive, 00:01:10.472 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:10.472 ==> default: -> value=-device, 
00:01:10.472 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.472 ==> default: -> value=-device, 00:01:10.472 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:10.472 ==> default: -> value=-drive, 00:01:10.472 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:10.472 ==> default: -> value=-device, 00:01:10.472 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.472 ==> default: -> value=-drive, 00:01:10.472 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:10.472 ==> default: -> value=-device, 00:01:10.472 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.472 ==> default: -> value=-drive, 00:01:10.472 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:10.472 ==> default: -> value=-device, 00:01:10.472 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.472 ==> default: Creating shared folders metadata... 00:01:10.472 ==> default: Starting domain. 00:01:12.368 ==> default: Waiting for domain to get an IP address... 00:01:27.278 ==> default: Waiting for SSH to become available... 00:01:29.211 ==> default: Configuring and enabling network interfaces... 00:01:33.394 default: SSH address: 192.168.121.41:22 00:01:33.394 default: SSH username: vagrant 00:01:33.394 default: SSH auth method: private key 00:01:35.297 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:43.402 ==> default: Mounting SSHFS shared folder... 00:01:45.300 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:45.300 ==> default: Checking Mount.. 00:01:46.320 ==> default: Folder Successfully Mounted! 00:01:46.320 ==> default: Running provisioner: file... 00:01:47.297 default: ~/.gitconfig => .gitconfig 00:01:47.555 00:01:47.555 SUCCESS! 00:01:47.555 00:01:47.555 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:47.555 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:47.555 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:47.555 00:01:47.563 [Pipeline] } 00:01:47.580 [Pipeline] // stage 00:01:47.588 [Pipeline] dir 00:01:47.588 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:47.590 [Pipeline] { 00:01:47.602 [Pipeline] catchError 00:01:47.604 [Pipeline] { 00:01:47.617 [Pipeline] sh 00:01:47.894 + vagrant ssh-config --host vagrant 00:01:47.894 + sed -ne /^Host/,$p 00:01:47.894 + tee ssh_conf 00:01:51.209 Host vagrant 00:01:51.209 HostName 192.168.121.41 00:01:51.209 User vagrant 00:01:51.209 Port 22 00:01:51.209 UserKnownHostsFile /dev/null 00:01:51.209 StrictHostKeyChecking no 00:01:51.209 PasswordAuthentication no 00:01:51.209 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:51.209 IdentitiesOnly yes 00:01:51.209 LogLevel FATAL 00:01:51.209 ForwardAgent yes 00:01:51.209 ForwardX11 yes 00:01:51.209 00:01:51.223 [Pipeline] withEnv 00:01:51.225 [Pipeline] { 00:01:51.239 [Pipeline] sh 00:01:51.514 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:51.514 source /etc/os-release 00:01:51.514 [[ -e /image.version ]] && img=$(< /image.version) 00:01:51.514 # Minimal, systemd-like check. 00:01:51.514 if [[ -e /.dockerenv ]]; then 00:01:51.514 # Clear garbage from the node's name: 00:01:51.514 # agt-er_autotest_547-896 -> autotest_547-896 00:01:51.514 # $HOSTNAME is the actual container id 00:01:51.514 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:51.514 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:51.514 # We can assume this is a mount from a host where container is running, 00:01:51.514 # so fetch its hostname to easily identify the target swarm worker. 00:01:51.514 container="$(< /etc/hostname) ($agent)" 00:01:51.514 else 00:01:51.514 # Fallback 00:01:51.514 container=$agent 00:01:51.514 fi 00:01:51.514 fi 00:01:51.514 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:51.514 00:01:51.784 [Pipeline] } 00:01:51.832 [Pipeline] // withEnv 00:01:51.843 [Pipeline] setCustomBuildProperty 00:01:51.857 [Pipeline] stage 00:01:51.859 [Pipeline] { (Tests) 00:01:51.880 [Pipeline] sh 00:01:52.159 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:52.435 [Pipeline] sh 00:01:52.775 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:52.789 [Pipeline] timeout 00:01:52.790 Timeout set to expire in 30 min 00:01:52.791 [Pipeline] { 00:01:52.808 [Pipeline] sh 00:01:53.088 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:53.653 HEAD is now at e9e51ebfe nvme/pcie: allocate cq from device-local numa node's memory 00:01:53.666 [Pipeline] sh 00:01:53.944 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:54.214 [Pipeline] sh 00:01:54.492 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:54.768 [Pipeline] sh 00:01:55.047 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:55.307 ++ readlink -f spdk_repo 00:01:55.307 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:55.307 + [[ -n /home/vagrant/spdk_repo ]] 00:01:55.307 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:55.307 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:01:55.307 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:55.307 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:55.307 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:55.307 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:55.307 + cd /home/vagrant/spdk_repo 00:01:55.307 + source /etc/os-release 00:01:55.307 ++ NAME='Fedora Linux' 00:01:55.307 ++ VERSION='38 (Cloud Edition)' 00:01:55.307 ++ ID=fedora 00:01:55.307 ++ VERSION_ID=38 00:01:55.307 ++ VERSION_CODENAME= 00:01:55.307 ++ PLATFORM_ID=platform:f38 00:01:55.307 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:55.307 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:55.307 ++ LOGO=fedora-logo-icon 00:01:55.307 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:55.307 ++ HOME_URL=https://fedoraproject.org/ 00:01:55.307 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:55.307 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:55.307 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:55.307 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:55.307 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:55.307 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:55.307 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:55.307 ++ SUPPORT_END=2024-05-14 00:01:55.307 ++ VARIANT='Cloud Edition' 00:01:55.307 ++ VARIANT_ID=cloud 00:01:55.307 + uname -a 00:01:55.307 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:55.307 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:55.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:55.824 Hugepages 00:01:55.824 node hugesize free / total 00:01:55.824 node0 1048576kB 0 / 0 00:01:55.824 node0 2048kB 0 / 0 00:01:55.824 00:01:55.824 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.824 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:55.824 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:55.824 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:55.824 + rm -f /tmp/spdk-ld-path 00:01:55.824 + source autorun-spdk.conf 00:01:55.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.824 ++ SPDK_TEST_NVMF=1 00:01:55.824 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.824 ++ SPDK_TEST_URING=1 00:01:55.824 ++ SPDK_TEST_USDT=1 00:01:55.824 ++ SPDK_RUN_UBSAN=1 00:01:55.824 ++ NET_TYPE=virt 00:01:55.824 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:55.824 ++ RUN_NIGHTLY=0 00:01:55.824 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.824 + [[ -n '' ]] 00:01:55.824 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:55.824 + for M in /var/spdk/build-*-manifest.txt 00:01:55.824 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:55.824 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.824 + for M in /var/spdk/build-*-manifest.txt 00:01:55.824 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.824 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.824 ++ uname 00:01:55.824 + [[ Linux == \L\i\n\u\x ]] 00:01:55.824 + sudo dmesg -T 00:01:55.824 + sudo dmesg --clear 00:01:55.824 + dmesg_pid=5100 00:01:55.824 + [[ Fedora Linux == FreeBSD ]] 00:01:55.824 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.825 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.825 + sudo dmesg -Tw 00:01:55.825 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.825 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:55.825 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.825 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.825 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.825 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:55.825 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.825 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.825 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.825 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.825 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.825 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.825 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:55.825 Test configuration: 00:01:55.825 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.825 SPDK_TEST_NVMF=1 00:01:55.825 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.825 SPDK_TEST_URING=1 00:01:55.825 SPDK_TEST_USDT=1 00:01:55.825 SPDK_RUN_UBSAN=1 00:01:55.825 NET_TYPE=virt 00:01:55.825 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.083 RUN_NIGHTLY=0 22:28:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:56.083 22:28:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:56.083 22:28:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.083 22:28:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.083 22:28:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.083 22:28:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.083 22:28:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.083 22:28:13 -- paths/export.sh@5 -- $ export PATH 00:01:56.083 22:28:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.083 22:28:13 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:56.083 22:28:13 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:56.083 22:28:13 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082493.XXXXXX 00:01:56.083 22:28:13 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721082493.WH3qAJ 00:01:56.083 22:28:13 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:56.083 22:28:13 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:56.083 22:28:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:56.083 22:28:13 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:56.083 22:28:13 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:56.083 22:28:13 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:56.083 22:28:13 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:56.083 22:28:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.083 22:28:13 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:56.083 22:28:13 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:56.083 22:28:13 -- pm/common@17 -- $ local monitor 00:01:56.083 22:28:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.083 22:28:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.083 22:28:13 -- pm/common@25 -- $ sleep 1 00:01:56.083 22:28:13 -- pm/common@21 -- $ date +%s 00:01:56.083 22:28:13 -- pm/common@21 -- $ date +%s 00:01:56.083 22:28:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721082493 00:01:56.083 22:28:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721082493 00:01:56.083 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721082493_collect-vmstat.pm.log 00:01:56.083 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721082493_collect-cpu-load.pm.log 00:01:57.020 22:28:14 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:57.020 22:28:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:57.020 22:28:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:57.020 22:28:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:57.020 22:28:14 -- spdk/autobuild.sh@16 -- $ date -u 00:01:57.020 Mon Jul 15 10:28:14 PM UTC 2024 00:01:57.020 22:28:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:57.020 v24.09-pre-235-ge9e51ebfe 00:01:57.020 22:28:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:57.020 22:28:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:57.020 22:28:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:57.020 22:28:14 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:57.020 22:28:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:57.020 22:28:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.020 ************************************ 00:01:57.020 START TEST ubsan 00:01:57.020 ************************************ 00:01:57.020 using ubsan 00:01:57.020 22:28:14 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:57.020 00:01:57.020 real 0m0.000s 
00:01:57.020 user 0m0.000s 00:01:57.020 sys 0m0.000s 00:01:57.020 22:28:14 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:57.020 22:28:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:57.020 ************************************ 00:01:57.020 END TEST ubsan 00:01:57.020 ************************************ 00:01:57.020 22:28:14 -- common/autotest_common.sh@1142 -- $ return 0 00:01:57.020 22:28:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:57.020 22:28:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:57.020 22:28:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:57.020 22:28:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:57.020 22:28:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:57.020 22:28:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:57.020 22:28:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:57.020 22:28:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:57.020 22:28:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:57.278 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:57.278 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:57.536 Using 'verbs' RDMA provider 00:02:13.382 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:25.604 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:25.604 Creating mk/config.mk...done. 00:02:25.604 Creating mk/cc.flags.mk...done. 00:02:25.604 Type 'make' to build. 00:02:25.604 22:28:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:25.604 22:28:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:25.604 22:28:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.604 22:28:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.604 ************************************ 00:02:25.604 START TEST make 00:02:25.604 ************************************ 00:02:25.604 22:28:43 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:25.862 make[1]: Nothing to be done for 'all'. 
00:02:35.845 The Meson build system 00:02:35.845 Version: 1.3.1 00:02:35.845 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:35.845 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:35.845 Build type: native build 00:02:35.845 Program cat found: YES (/usr/bin/cat) 00:02:35.845 Project name: DPDK 00:02:35.845 Project version: 24.03.0 00:02:35.845 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:35.845 C linker for the host machine: cc ld.bfd 2.39-16 00:02:35.845 Host machine cpu family: x86_64 00:02:35.845 Host machine cpu: x86_64 00:02:35.845 Message: ## Building in Developer Mode ## 00:02:35.845 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:35.845 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:35.845 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:35.845 Program python3 found: YES (/usr/bin/python3) 00:02:35.845 Program cat found: YES (/usr/bin/cat) 00:02:35.845 Compiler for C supports arguments -march=native: YES 00:02:35.845 Checking for size of "void *" : 8 00:02:35.845 Checking for size of "void *" : 8 (cached) 00:02:35.845 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:35.845 Library m found: YES 00:02:35.845 Library numa found: YES 00:02:35.845 Has header "numaif.h" : YES 00:02:35.845 Library fdt found: NO 00:02:35.845 Library execinfo found: NO 00:02:35.845 Has header "execinfo.h" : YES 00:02:35.845 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:35.845 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:35.845 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:35.845 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:35.845 Run-time dependency openssl found: YES 3.0.9 00:02:35.845 Run-time dependency libpcap found: YES 1.10.4 00:02:35.845 Has header "pcap.h" with dependency libpcap: YES 00:02:35.845 Compiler for C supports arguments -Wcast-qual: YES 00:02:35.845 Compiler for C supports arguments -Wdeprecated: YES 00:02:35.845 Compiler for C supports arguments -Wformat: YES 00:02:35.845 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:35.845 Compiler for C supports arguments -Wformat-security: NO 00:02:35.845 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.845 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:35.845 Compiler for C supports arguments -Wnested-externs: YES 00:02:35.845 Compiler for C supports arguments -Wold-style-definition: YES 00:02:35.845 Compiler for C supports arguments -Wpointer-arith: YES 00:02:35.845 Compiler for C supports arguments -Wsign-compare: YES 00:02:35.845 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:35.845 Compiler for C supports arguments -Wundef: YES 00:02:35.845 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.845 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:35.845 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:35.845 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.845 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:35.845 Program objdump found: YES (/usr/bin/objdump) 00:02:35.845 Compiler for C supports arguments -mavx512f: YES 00:02:35.845 Checking if "AVX512 checking" compiles: YES 00:02:35.845 Fetching value of define "__SSE4_2__" : 1 00:02:35.845 Fetching value of define 
"__AES__" : 1 00:02:35.845 Fetching value of define "__AVX__" : 1 00:02:35.845 Fetching value of define "__AVX2__" : 1 00:02:35.845 Fetching value of define "__AVX512BW__" : (undefined) 00:02:35.845 Fetching value of define "__AVX512CD__" : (undefined) 00:02:35.845 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:35.845 Fetching value of define "__AVX512F__" : (undefined) 00:02:35.845 Fetching value of define "__AVX512VL__" : (undefined) 00:02:35.845 Fetching value of define "__PCLMUL__" : 1 00:02:35.845 Fetching value of define "__RDRND__" : 1 00:02:35.845 Fetching value of define "__RDSEED__" : 1 00:02:35.845 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:35.845 Fetching value of define "__znver1__" : (undefined) 00:02:35.845 Fetching value of define "__znver2__" : (undefined) 00:02:35.845 Fetching value of define "__znver3__" : (undefined) 00:02:35.845 Fetching value of define "__znver4__" : (undefined) 00:02:35.845 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:35.845 Message: lib/log: Defining dependency "log" 00:02:35.845 Message: lib/kvargs: Defining dependency "kvargs" 00:02:35.846 Message: lib/telemetry: Defining dependency "telemetry" 00:02:35.846 Checking for function "getentropy" : NO 00:02:35.846 Message: lib/eal: Defining dependency "eal" 00:02:35.846 Message: lib/ring: Defining dependency "ring" 00:02:35.846 Message: lib/rcu: Defining dependency "rcu" 00:02:35.846 Message: lib/mempool: Defining dependency "mempool" 00:02:35.846 Message: lib/mbuf: Defining dependency "mbuf" 00:02:35.846 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:35.846 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.846 Compiler for C supports arguments -mpclmul: YES 00:02:35.846 Compiler for C supports arguments -maes: YES 00:02:35.846 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.846 Compiler for C supports arguments -mavx512bw: YES 00:02:35.846 Compiler for C supports arguments -mavx512dq: YES 00:02:35.846 Compiler for C supports arguments -mavx512vl: YES 00:02:35.846 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:35.846 Compiler for C supports arguments -mavx2: YES 00:02:35.846 Compiler for C supports arguments -mavx: YES 00:02:35.846 Message: lib/net: Defining dependency "net" 00:02:35.846 Message: lib/meter: Defining dependency "meter" 00:02:35.846 Message: lib/ethdev: Defining dependency "ethdev" 00:02:35.846 Message: lib/pci: Defining dependency "pci" 00:02:35.846 Message: lib/cmdline: Defining dependency "cmdline" 00:02:35.846 Message: lib/hash: Defining dependency "hash" 00:02:35.846 Message: lib/timer: Defining dependency "timer" 00:02:35.846 Message: lib/compressdev: Defining dependency "compressdev" 00:02:35.846 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:35.846 Message: lib/dmadev: Defining dependency "dmadev" 00:02:35.846 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:35.846 Message: lib/power: Defining dependency "power" 00:02:35.846 Message: lib/reorder: Defining dependency "reorder" 00:02:35.846 Message: lib/security: Defining dependency "security" 00:02:35.846 Has header "linux/userfaultfd.h" : YES 00:02:35.846 Has header "linux/vduse.h" : YES 00:02:35.846 Message: lib/vhost: Defining dependency "vhost" 00:02:35.846 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.846 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.846 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.846 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.846 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:35.846 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:35.846 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:35.846 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:35.846 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:35.846 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:35.846 Program doxygen found: YES (/usr/bin/doxygen) 00:02:35.846 Configuring doxy-api-html.conf using configuration 00:02:35.846 Configuring doxy-api-man.conf using configuration 00:02:35.846 Program mandb found: YES (/usr/bin/mandb) 00:02:35.846 Program sphinx-build found: NO 00:02:35.846 Configuring rte_build_config.h using configuration 00:02:35.846 Message: 00:02:35.846 ================= 00:02:35.846 Applications Enabled 00:02:35.846 ================= 00:02:35.846 00:02:35.846 apps: 00:02:35.846 00:02:35.846 00:02:35.846 Message: 00:02:35.846 ================= 00:02:35.846 Libraries Enabled 00:02:35.846 ================= 00:02:35.846 00:02:35.846 libs: 00:02:35.846 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.846 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:35.846 cryptodev, dmadev, power, reorder, security, vhost, 00:02:35.846 00:02:35.846 Message: 00:02:35.846 =============== 00:02:35.846 Drivers Enabled 00:02:35.846 =============== 00:02:35.846 00:02:35.846 common: 00:02:35.846 00:02:35.846 bus: 00:02:35.846 pci, vdev, 00:02:35.846 mempool: 00:02:35.846 ring, 00:02:35.846 dma: 00:02:35.846 00:02:35.846 net: 00:02:35.846 00:02:35.846 crypto: 00:02:35.846 00:02:35.846 compress: 00:02:35.846 00:02:35.846 vdpa: 00:02:35.846 00:02:35.846 00:02:35.846 Message: 00:02:35.846 ================= 00:02:35.846 Content Skipped 00:02:35.846 ================= 00:02:35.846 00:02:35.846 apps: 00:02:35.846 dumpcap: explicitly disabled via build config 00:02:35.846 graph: explicitly disabled via build config 00:02:35.846 pdump: explicitly disabled via build config 00:02:35.846 proc-info: explicitly disabled via build config 00:02:35.846 test-acl: explicitly disabled via build config 00:02:35.846 test-bbdev: explicitly disabled via build config 00:02:35.846 test-cmdline: explicitly disabled via build config 00:02:35.846 test-compress-perf: explicitly disabled via build config 00:02:35.846 test-crypto-perf: explicitly disabled via build config 00:02:35.846 test-dma-perf: explicitly disabled via build config 00:02:35.846 test-eventdev: explicitly disabled via build config 00:02:35.846 test-fib: explicitly disabled via build config 00:02:35.846 test-flow-perf: explicitly disabled via build config 00:02:35.846 test-gpudev: explicitly disabled via build config 00:02:35.846 test-mldev: explicitly disabled via build config 00:02:35.846 test-pipeline: explicitly disabled via build config 00:02:35.846 test-pmd: explicitly disabled via build config 00:02:35.846 test-regex: explicitly disabled via build config 00:02:35.846 test-sad: explicitly disabled via build config 00:02:35.846 test-security-perf: explicitly disabled via build config 00:02:35.846 00:02:35.846 libs: 00:02:35.846 argparse: explicitly disabled via build config 00:02:35.846 metrics: explicitly disabled via build config 00:02:35.846 acl: explicitly disabled via build config 00:02:35.846 bbdev: explicitly disabled via build config 00:02:35.846 
bitratestats: explicitly disabled via build config 00:02:35.846 bpf: explicitly disabled via build config 00:02:35.846 cfgfile: explicitly disabled via build config 00:02:35.846 distributor: explicitly disabled via build config 00:02:35.846 efd: explicitly disabled via build config 00:02:35.846 eventdev: explicitly disabled via build config 00:02:35.846 dispatcher: explicitly disabled via build config 00:02:35.846 gpudev: explicitly disabled via build config 00:02:35.846 gro: explicitly disabled via build config 00:02:35.846 gso: explicitly disabled via build config 00:02:35.846 ip_frag: explicitly disabled via build config 00:02:35.846 jobstats: explicitly disabled via build config 00:02:35.846 latencystats: explicitly disabled via build config 00:02:35.846 lpm: explicitly disabled via build config 00:02:35.846 member: explicitly disabled via build config 00:02:35.846 pcapng: explicitly disabled via build config 00:02:35.846 rawdev: explicitly disabled via build config 00:02:35.846 regexdev: explicitly disabled via build config 00:02:35.846 mldev: explicitly disabled via build config 00:02:35.846 rib: explicitly disabled via build config 00:02:35.846 sched: explicitly disabled via build config 00:02:35.846 stack: explicitly disabled via build config 00:02:35.846 ipsec: explicitly disabled via build config 00:02:35.846 pdcp: explicitly disabled via build config 00:02:35.846 fib: explicitly disabled via build config 00:02:35.846 port: explicitly disabled via build config 00:02:35.846 pdump: explicitly disabled via build config 00:02:35.846 table: explicitly disabled via build config 00:02:35.846 pipeline: explicitly disabled via build config 00:02:35.846 graph: explicitly disabled via build config 00:02:35.846 node: explicitly disabled via build config 00:02:35.846 00:02:35.846 drivers: 00:02:35.846 common/cpt: not in enabled drivers build config 00:02:35.846 common/dpaax: not in enabled drivers build config 00:02:35.846 common/iavf: not in enabled drivers build config 00:02:35.846 common/idpf: not in enabled drivers build config 00:02:35.846 common/ionic: not in enabled drivers build config 00:02:35.846 common/mvep: not in enabled drivers build config 00:02:35.846 common/octeontx: not in enabled drivers build config 00:02:35.846 bus/auxiliary: not in enabled drivers build config 00:02:35.846 bus/cdx: not in enabled drivers build config 00:02:35.846 bus/dpaa: not in enabled drivers build config 00:02:35.846 bus/fslmc: not in enabled drivers build config 00:02:35.846 bus/ifpga: not in enabled drivers build config 00:02:35.846 bus/platform: not in enabled drivers build config 00:02:35.846 bus/uacce: not in enabled drivers build config 00:02:35.846 bus/vmbus: not in enabled drivers build config 00:02:35.846 common/cnxk: not in enabled drivers build config 00:02:35.846 common/mlx5: not in enabled drivers build config 00:02:35.846 common/nfp: not in enabled drivers build config 00:02:35.846 common/nitrox: not in enabled drivers build config 00:02:35.846 common/qat: not in enabled drivers build config 00:02:35.846 common/sfc_efx: not in enabled drivers build config 00:02:35.846 mempool/bucket: not in enabled drivers build config 00:02:35.846 mempool/cnxk: not in enabled drivers build config 00:02:35.846 mempool/dpaa: not in enabled drivers build config 00:02:35.846 mempool/dpaa2: not in enabled drivers build config 00:02:35.846 mempool/octeontx: not in enabled drivers build config 00:02:35.846 mempool/stack: not in enabled drivers build config 00:02:35.846 dma/cnxk: not in enabled drivers build 
config 00:02:35.846 dma/dpaa: not in enabled drivers build config 00:02:35.846 dma/dpaa2: not in enabled drivers build config 00:02:35.846 dma/hisilicon: not in enabled drivers build config 00:02:35.846 dma/idxd: not in enabled drivers build config 00:02:35.846 dma/ioat: not in enabled drivers build config 00:02:35.846 dma/skeleton: not in enabled drivers build config 00:02:35.846 net/af_packet: not in enabled drivers build config 00:02:35.846 net/af_xdp: not in enabled drivers build config 00:02:35.846 net/ark: not in enabled drivers build config 00:02:35.846 net/atlantic: not in enabled drivers build config 00:02:35.846 net/avp: not in enabled drivers build config 00:02:35.846 net/axgbe: not in enabled drivers build config 00:02:35.846 net/bnx2x: not in enabled drivers build config 00:02:35.846 net/bnxt: not in enabled drivers build config 00:02:35.846 net/bonding: not in enabled drivers build config 00:02:35.846 net/cnxk: not in enabled drivers build config 00:02:35.846 net/cpfl: not in enabled drivers build config 00:02:35.846 net/cxgbe: not in enabled drivers build config 00:02:35.846 net/dpaa: not in enabled drivers build config 00:02:35.846 net/dpaa2: not in enabled drivers build config 00:02:35.846 net/e1000: not in enabled drivers build config 00:02:35.846 net/ena: not in enabled drivers build config 00:02:35.846 net/enetc: not in enabled drivers build config 00:02:35.846 net/enetfec: not in enabled drivers build config 00:02:35.846 net/enic: not in enabled drivers build config 00:02:35.846 net/failsafe: not in enabled drivers build config 00:02:35.846 net/fm10k: not in enabled drivers build config 00:02:35.846 net/gve: not in enabled drivers build config 00:02:35.846 net/hinic: not in enabled drivers build config 00:02:35.846 net/hns3: not in enabled drivers build config 00:02:35.846 net/i40e: not in enabled drivers build config 00:02:35.846 net/iavf: not in enabled drivers build config 00:02:35.846 net/ice: not in enabled drivers build config 00:02:35.846 net/idpf: not in enabled drivers build config 00:02:35.846 net/igc: not in enabled drivers build config 00:02:35.847 net/ionic: not in enabled drivers build config 00:02:35.847 net/ipn3ke: not in enabled drivers build config 00:02:35.847 net/ixgbe: not in enabled drivers build config 00:02:35.847 net/mana: not in enabled drivers build config 00:02:35.847 net/memif: not in enabled drivers build config 00:02:35.847 net/mlx4: not in enabled drivers build config 00:02:35.847 net/mlx5: not in enabled drivers build config 00:02:35.847 net/mvneta: not in enabled drivers build config 00:02:35.847 net/mvpp2: not in enabled drivers build config 00:02:35.847 net/netvsc: not in enabled drivers build config 00:02:35.847 net/nfb: not in enabled drivers build config 00:02:35.847 net/nfp: not in enabled drivers build config 00:02:35.847 net/ngbe: not in enabled drivers build config 00:02:35.847 net/null: not in enabled drivers build config 00:02:35.847 net/octeontx: not in enabled drivers build config 00:02:35.847 net/octeon_ep: not in enabled drivers build config 00:02:35.847 net/pcap: not in enabled drivers build config 00:02:35.847 net/pfe: not in enabled drivers build config 00:02:35.847 net/qede: not in enabled drivers build config 00:02:35.847 net/ring: not in enabled drivers build config 00:02:35.847 net/sfc: not in enabled drivers build config 00:02:35.847 net/softnic: not in enabled drivers build config 00:02:35.847 net/tap: not in enabled drivers build config 00:02:35.847 net/thunderx: not in enabled drivers build config 00:02:35.847 
net/txgbe: not in enabled drivers build config 00:02:35.847 net/vdev_netvsc: not in enabled drivers build config 00:02:35.847 net/vhost: not in enabled drivers build config 00:02:35.847 net/virtio: not in enabled drivers build config 00:02:35.847 net/vmxnet3: not in enabled drivers build config 00:02:35.847 raw/*: missing internal dependency, "rawdev" 00:02:35.847 crypto/armv8: not in enabled drivers build config 00:02:35.847 crypto/bcmfs: not in enabled drivers build config 00:02:35.847 crypto/caam_jr: not in enabled drivers build config 00:02:35.847 crypto/ccp: not in enabled drivers build config 00:02:35.847 crypto/cnxk: not in enabled drivers build config 00:02:35.847 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.847 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.847 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.847 crypto/mlx5: not in enabled drivers build config 00:02:35.847 crypto/mvsam: not in enabled drivers build config 00:02:35.847 crypto/nitrox: not in enabled drivers build config 00:02:35.847 crypto/null: not in enabled drivers build config 00:02:35.847 crypto/octeontx: not in enabled drivers build config 00:02:35.847 crypto/openssl: not in enabled drivers build config 00:02:35.847 crypto/scheduler: not in enabled drivers build config 00:02:35.847 crypto/uadk: not in enabled drivers build config 00:02:35.847 crypto/virtio: not in enabled drivers build config 00:02:35.847 compress/isal: not in enabled drivers build config 00:02:35.847 compress/mlx5: not in enabled drivers build config 00:02:35.847 compress/nitrox: not in enabled drivers build config 00:02:35.847 compress/octeontx: not in enabled drivers build config 00:02:35.847 compress/zlib: not in enabled drivers build config 00:02:35.847 regex/*: missing internal dependency, "regexdev" 00:02:35.847 ml/*: missing internal dependency, "mldev" 00:02:35.847 vdpa/ifc: not in enabled drivers build config 00:02:35.847 vdpa/mlx5: not in enabled drivers build config 00:02:35.847 vdpa/nfp: not in enabled drivers build config 00:02:35.847 vdpa/sfc: not in enabled drivers build config 00:02:35.847 event/*: missing internal dependency, "eventdev" 00:02:35.847 baseband/*: missing internal dependency, "bbdev" 00:02:35.847 gpu/*: missing internal dependency, "gpudev" 00:02:35.847 00:02:35.847 00:02:36.413 Build targets in project: 85 00:02:36.413 00:02:36.413 DPDK 24.03.0 00:02:36.413 00:02:36.413 User defined options 00:02:36.413 buildtype : debug 00:02:36.413 default_library : shared 00:02:36.413 libdir : lib 00:02:36.413 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:36.413 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:36.413 c_link_args : 00:02:36.413 cpu_instruction_set: native 00:02:36.413 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:36.413 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:36.413 enable_docs : false 00:02:36.413 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:36.413 enable_kmods : false 00:02:36.413 max_lcores : 128 00:02:36.413 tests : false 00:02:36.413 00:02:36.413 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.671 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:36.928 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:36.928 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:36.928 [3/268] Linking static target lib/librte_kvargs.a 00:02:36.928 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:36.928 [5/268] Linking static target lib/librte_log.a 00:02:36.928 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.494 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.494 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.494 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.494 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.494 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.753 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.753 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.753 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.753 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.753 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.753 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.753 [18/268] Linking static target lib/librte_telemetry.a 00:02:37.753 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.012 [20/268] Linking target lib/librte_log.so.24.1 00:02:38.295 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.295 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.295 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:38.295 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.563 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.563 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.563 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:38.563 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.563 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.563 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.563 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.563 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.821 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:38.821 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.821 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.821 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:38.821 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.388 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.388 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.388 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.388 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.388 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.388 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.388 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.388 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.646 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.647 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.647 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.905 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.905 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.905 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.162 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.162 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.420 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.420 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.678 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.678 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.678 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.678 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.678 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.936 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.936 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.936 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.936 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.194 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:41.194 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:41.451 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:41.451 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:41.709 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:41.709 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.709 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:41.709 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:41.709 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:41.967 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:41.967 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.967 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.967 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.226 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.226 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.483 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.483 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.483 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.740 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.740 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.740 [85/268] Linking static target lib/librte_eal.a 00:02:42.740 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.997 [87/268] Linking static target lib/librte_ring.a 00:02:43.256 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.256 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.256 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.256 [91/268] Linking static target lib/librte_rcu.a 00:02:43.256 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.256 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.256 [94/268] Linking static target lib/librte_mempool.a 00:02:43.514 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.514 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.514 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.772 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.772 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.772 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:43.772 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:44.338 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.338 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.338 [104/268] Linking static target lib/librte_mbuf.a 00:02:44.338 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.338 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.338 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.595 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.595 [109/268] Linking static target lib/librte_net.a 00:02:44.852 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.853 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.853 [112/268] Linking static target lib/librte_meter.a 00:02:44.853 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.111 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.111 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:45.111 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.111 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:45.369 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.369 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.631 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:45.908 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.908 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:46.166 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:46.166 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.166 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:46.166 [126/268] Linking static target lib/librte_pci.a 00:02:46.166 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:46.425 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:46.425 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:46.425 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:46.425 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:46.425 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:46.684 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.684 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:46.684 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:46.684 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:46.684 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:46.684 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.684 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.684 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.684 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.684 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.684 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:46.684 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.684 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:46.942 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:46.942 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.201 [148/268] Linking static target lib/librte_cmdline.a 00:02:47.201 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.201 [150/268] Linking static target lib/librte_ethdev.a 00:02:47.459 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.459 [152/268] Linking static target lib/librte_timer.a 00:02:47.459 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:47.459 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.717 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.718 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.718 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:47.718 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.718 [159/268] Linking static target lib/librte_hash.a 00:02:47.976 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.976 [161/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.234 [162/268] Linking static target lib/librte_compressdev.a 00:02:48.234 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.234 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.493 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.493 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.493 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.493 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.493 [169/268] Linking static target lib/librte_dmadev.a 00:02:48.751 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.751 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.009 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.009 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.009 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.009 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.009 [176/268] Linking static target lib/librte_cryptodev.a 00:02:49.009 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.268 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.526 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.526 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.526 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.526 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.526 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.784 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.784 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.784 [186/268] Linking static target lib/librte_power.a 00:02:50.043 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.043 [188/268] Linking static target lib/librte_reorder.a 00:02:50.301 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.301 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.301 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.301 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.301 [193/268] Linking static target lib/librte_security.a 00:02:50.560 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.560 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.128 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.128 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.128 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.128 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.128 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.386 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.386 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.645 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.645 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.645 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.904 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.904 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.904 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.904 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.904 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.163 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.163 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.163 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.163 [214/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.163 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:52.163 [216/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.422 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.422 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.422 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.422 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:52.422 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.422 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.681 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.681 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.681 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.681 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.681 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.681 [228/268] Linking static target drivers/librte_mempool_ring.a 00:02:53.248 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.248 [230/268] Linking static target lib/librte_vhost.a 00:02:54.184 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.184 [232/268] Linking target lib/librte_eal.so.24.1 00:02:54.184 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:54.184 [234/268] Linking target lib/librte_ring.so.24.1 00:02:54.184 [235/268] Linking target lib/librte_pci.so.24.1 00:02:54.184 [236/268] Linking target lib/librte_meter.so.24.1 00:02:54.184 [237/268] Linking target lib/librte_timer.so.24.1 00:02:54.184 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:54.184 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
00:02:54.442 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:54.442 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:54.442 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:54.442 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:54.442 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:54.442 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:54.442 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:54.442 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:54.700 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:54.700 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:54.700 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:54.700 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:54.700 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.700 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:54.959 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:54.959 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:54.959 [256/268] Linking target lib/librte_net.so.24.1 00:02:54.959 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:54.959 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.959 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:54.959 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:55.218 [261/268] Linking target lib/librte_hash.so.24.1 00:02:55.218 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:55.218 [263/268] Linking target lib/librte_security.so.24.1 00:02:55.218 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.218 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:55.218 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:55.218 [267/268] Linking target lib/librte_power.so.24.1 00:02:55.477 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:55.477 INFO: autodetecting backend as ninja 00:02:55.477 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:56.409 CC lib/log/log.o 00:02:56.409 CC lib/ut/ut.o 00:02:56.409 CC lib/log/log_flags.o 00:02:56.409 CC lib/log/log_deprecated.o 00:02:56.409 CC lib/ut_mock/mock.o 00:02:56.668 LIB libspdk_ut.a 00:02:56.668 LIB libspdk_ut_mock.a 00:02:56.668 SO libspdk_ut.so.2.0 00:02:56.668 LIB libspdk_log.a 00:02:56.927 SO libspdk_ut_mock.so.6.0 00:02:56.927 SO libspdk_log.so.7.0 00:02:56.927 SYMLINK libspdk_ut_mock.so 00:02:56.927 SYMLINK libspdk_ut.so 00:02:56.927 SYMLINK libspdk_log.so 00:02:57.186 CC lib/util/base64.o 00:02:57.186 CXX lib/trace_parser/trace.o 00:02:57.186 CC lib/ioat/ioat.o 00:02:57.186 CC lib/util/bit_array.o 00:02:57.186 CC lib/util/cpuset.o 00:02:57.186 CC lib/util/crc16.o 00:02:57.186 CC lib/util/crc32.o 00:02:57.186 CC lib/dma/dma.o 00:02:57.186 CC lib/util/crc32c.o 00:02:57.445 CC lib/vfio_user/host/vfio_user_pci.o 00:02:57.445 CC lib/util/crc32_ieee.o 00:02:57.445 CC lib/util/crc64.o 00:02:57.445 CC lib/util/dif.o 
00:02:57.445 CC lib/vfio_user/host/vfio_user.o 00:02:57.445 LIB libspdk_dma.a 00:02:57.445 CC lib/util/fd.o 00:02:57.445 SO libspdk_dma.so.4.0 00:02:57.445 CC lib/util/fd_group.o 00:02:57.445 SYMLINK libspdk_dma.so 00:02:57.445 CC lib/util/file.o 00:02:57.445 CC lib/util/hexlify.o 00:02:57.445 LIB libspdk_ioat.a 00:02:57.445 CC lib/util/iov.o 00:02:57.445 SO libspdk_ioat.so.7.0 00:02:57.703 CC lib/util/math.o 00:02:57.703 CC lib/util/net.o 00:02:57.703 SYMLINK libspdk_ioat.so 00:02:57.703 CC lib/util/pipe.o 00:02:57.703 LIB libspdk_vfio_user.a 00:02:57.703 CC lib/util/strerror_tls.o 00:02:57.703 CC lib/util/string.o 00:02:57.703 SO libspdk_vfio_user.so.5.0 00:02:57.703 CC lib/util/uuid.o 00:02:57.703 CC lib/util/xor.o 00:02:57.703 CC lib/util/zipf.o 00:02:57.703 SYMLINK libspdk_vfio_user.so 00:02:57.962 LIB libspdk_util.a 00:02:57.962 SO libspdk_util.so.9.1 00:02:58.220 LIB libspdk_trace_parser.a 00:02:58.220 SYMLINK libspdk_util.so 00:02:58.220 SO libspdk_trace_parser.so.5.0 00:02:58.477 SYMLINK libspdk_trace_parser.so 00:02:58.477 CC lib/idxd/idxd.o 00:02:58.477 CC lib/idxd/idxd_user.o 00:02:58.477 CC lib/rdma_provider/common.o 00:02:58.477 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:58.477 CC lib/env_dpdk/env.o 00:02:58.477 CC lib/idxd/idxd_kernel.o 00:02:58.477 CC lib/json/json_parse.o 00:02:58.477 CC lib/conf/conf.o 00:02:58.477 CC lib/vmd/vmd.o 00:02:58.477 CC lib/rdma_utils/rdma_utils.o 00:02:58.735 CC lib/json/json_util.o 00:02:58.735 CC lib/json/json_write.o 00:02:58.735 LIB libspdk_rdma_provider.a 00:02:58.735 LIB libspdk_conf.a 00:02:58.735 SO libspdk_rdma_provider.so.6.0 00:02:58.735 CC lib/vmd/led.o 00:02:58.735 CC lib/env_dpdk/memory.o 00:02:58.735 SO libspdk_conf.so.6.0 00:02:58.735 LIB libspdk_rdma_utils.a 00:02:58.735 SYMLINK libspdk_rdma_provider.so 00:02:58.735 CC lib/env_dpdk/pci.o 00:02:58.735 SYMLINK libspdk_conf.so 00:02:58.735 CC lib/env_dpdk/init.o 00:02:58.735 SO libspdk_rdma_utils.so.1.0 00:02:58.735 CC lib/env_dpdk/threads.o 00:02:58.735 SYMLINK libspdk_rdma_utils.so 00:02:58.735 CC lib/env_dpdk/pci_ioat.o 00:02:58.735 CC lib/env_dpdk/pci_virtio.o 00:02:58.994 LIB libspdk_json.a 00:02:58.994 SO libspdk_json.so.6.0 00:02:58.994 CC lib/env_dpdk/pci_vmd.o 00:02:58.994 LIB libspdk_idxd.a 00:02:58.994 CC lib/env_dpdk/pci_idxd.o 00:02:58.994 SYMLINK libspdk_json.so 00:02:58.994 SO libspdk_idxd.so.12.0 00:02:58.994 CC lib/env_dpdk/pci_event.o 00:02:58.994 CC lib/env_dpdk/sigbus_handler.o 00:02:58.994 LIB libspdk_vmd.a 00:02:58.994 SYMLINK libspdk_idxd.so 00:02:58.994 SO libspdk_vmd.so.6.0 00:02:58.994 CC lib/env_dpdk/pci_dpdk.o 00:02:59.259 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:59.259 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:59.259 SYMLINK libspdk_vmd.so 00:02:59.259 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.259 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.259 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.259 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:59.525 LIB libspdk_jsonrpc.a 00:02:59.525 SO libspdk_jsonrpc.so.6.0 00:02:59.784 SYMLINK libspdk_jsonrpc.so 00:02:59.784 LIB libspdk_env_dpdk.a 00:03:00.041 SO libspdk_env_dpdk.so.15.0 00:03:00.041 CC lib/rpc/rpc.o 00:03:00.041 SYMLINK libspdk_env_dpdk.so 00:03:00.299 LIB libspdk_rpc.a 00:03:00.299 SO libspdk_rpc.so.6.0 00:03:00.299 SYMLINK libspdk_rpc.so 00:03:00.556 CC lib/trace/trace.o 00:03:00.556 CC lib/trace/trace_rpc.o 00:03:00.556 CC lib/trace/trace_flags.o 00:03:00.556 CC lib/keyring/keyring.o 00:03:00.556 CC lib/notify/notify_rpc.o 00:03:00.556 CC lib/notify/notify.o 00:03:00.556 CC lib/keyring/keyring_rpc.o 
00:03:00.814 LIB libspdk_notify.a 00:03:00.814 SO libspdk_notify.so.6.0 00:03:00.814 LIB libspdk_keyring.a 00:03:01.070 SYMLINK libspdk_notify.so 00:03:01.071 LIB libspdk_trace.a 00:03:01.071 SO libspdk_keyring.so.1.0 00:03:01.071 SO libspdk_trace.so.10.0 00:03:01.071 SYMLINK libspdk_keyring.so 00:03:01.071 SYMLINK libspdk_trace.so 00:03:01.329 CC lib/sock/sock.o 00:03:01.329 CC lib/sock/sock_rpc.o 00:03:01.329 CC lib/thread/thread.o 00:03:01.329 CC lib/thread/iobuf.o 00:03:01.894 LIB libspdk_sock.a 00:03:01.894 SO libspdk_sock.so.10.0 00:03:01.894 SYMLINK libspdk_sock.so 00:03:02.460 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.460 CC lib/nvme/nvme_ctrlr.o 00:03:02.460 CC lib/nvme/nvme_fabric.o 00:03:02.460 CC lib/nvme/nvme_ns_cmd.o 00:03:02.460 CC lib/nvme/nvme_ns.o 00:03:02.460 CC lib/nvme/nvme_pcie_common.o 00:03:02.460 CC lib/nvme/nvme_pcie.o 00:03:02.460 CC lib/nvme/nvme_qpair.o 00:03:02.460 CC lib/nvme/nvme.o 00:03:03.026 LIB libspdk_thread.a 00:03:03.026 SO libspdk_thread.so.10.1 00:03:03.026 CC lib/nvme/nvme_quirks.o 00:03:03.026 SYMLINK libspdk_thread.so 00:03:03.026 CC lib/nvme/nvme_transport.o 00:03:03.026 CC lib/nvme/nvme_discovery.o 00:03:03.285 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.285 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.285 CC lib/nvme/nvme_tcp.o 00:03:03.285 CC lib/accel/accel.o 00:03:03.285 CC lib/blob/blobstore.o 00:03:03.543 CC lib/blob/request.o 00:03:03.802 CC lib/blob/zeroes.o 00:03:03.802 CC lib/blob/blob_bs_dev.o 00:03:03.802 CC lib/accel/accel_rpc.o 00:03:03.802 CC lib/init/json_config.o 00:03:03.802 CC lib/accel/accel_sw.o 00:03:03.802 CC lib/nvme/nvme_opal.o 00:03:04.060 CC lib/nvme/nvme_io_msg.o 00:03:04.060 CC lib/virtio/virtio.o 00:03:04.060 CC lib/virtio/virtio_vhost_user.o 00:03:04.060 CC lib/init/subsystem.o 00:03:04.318 CC lib/init/subsystem_rpc.o 00:03:04.318 LIB libspdk_accel.a 00:03:04.318 CC lib/virtio/virtio_vfio_user.o 00:03:04.318 SO libspdk_accel.so.15.1 00:03:04.318 CC lib/virtio/virtio_pci.o 00:03:04.318 CC lib/init/rpc.o 00:03:04.318 CC lib/nvme/nvme_poll_group.o 00:03:04.318 SYMLINK libspdk_accel.so 00:03:04.318 CC lib/nvme/nvme_zns.o 00:03:04.576 CC lib/nvme/nvme_stubs.o 00:03:04.576 CC lib/nvme/nvme_auth.o 00:03:04.576 LIB libspdk_init.a 00:03:04.576 SO libspdk_init.so.5.0 00:03:04.576 CC lib/bdev/bdev.o 00:03:04.576 LIB libspdk_virtio.a 00:03:04.576 CC lib/bdev/bdev_rpc.o 00:03:04.576 SYMLINK libspdk_init.so 00:03:04.576 CC lib/bdev/bdev_zone.o 00:03:04.576 SO libspdk_virtio.so.7.0 00:03:04.834 SYMLINK libspdk_virtio.so 00:03:04.834 CC lib/bdev/part.o 00:03:04.834 CC lib/nvme/nvme_cuse.o 00:03:05.093 CC lib/nvme/nvme_rdma.o 00:03:05.093 CC lib/bdev/scsi_nvme.o 00:03:05.093 CC lib/event/app.o 00:03:05.093 CC lib/event/reactor.o 00:03:05.093 CC lib/event/log_rpc.o 00:03:05.093 CC lib/event/app_rpc.o 00:03:05.351 CC lib/event/scheduler_static.o 00:03:05.609 LIB libspdk_event.a 00:03:05.610 SO libspdk_event.so.14.0 00:03:05.610 SYMLINK libspdk_event.so 00:03:06.184 LIB libspdk_blob.a 00:03:06.443 SO libspdk_blob.so.11.0 00:03:06.443 LIB libspdk_nvme.a 00:03:06.443 SYMLINK libspdk_blob.so 00:03:06.701 SO libspdk_nvme.so.13.1 00:03:06.701 CC lib/blobfs/blobfs.o 00:03:06.701 CC lib/blobfs/tree.o 00:03:06.701 CC lib/lvol/lvol.o 00:03:06.960 SYMLINK libspdk_nvme.so 00:03:07.527 LIB libspdk_bdev.a 00:03:07.527 SO libspdk_bdev.so.15.1 00:03:07.527 LIB libspdk_blobfs.a 00:03:07.527 SO libspdk_blobfs.so.10.0 00:03:07.527 SYMLINK libspdk_bdev.so 00:03:07.527 SYMLINK libspdk_blobfs.so 00:03:07.785 LIB libspdk_lvol.a 00:03:07.785 CC lib/ublk/ublk.o 
00:03:07.785 CC lib/ublk/ublk_rpc.o 00:03:07.785 CC lib/nvmf/ctrlr.o 00:03:07.785 CC lib/nvmf/ctrlr_discovery.o 00:03:07.785 CC lib/nvmf/ctrlr_bdev.o 00:03:07.785 CC lib/scsi/dev.o 00:03:07.785 CC lib/nbd/nbd.o 00:03:07.785 CC lib/scsi/lun.o 00:03:07.785 CC lib/ftl/ftl_core.o 00:03:07.785 SO libspdk_lvol.so.10.0 00:03:07.785 SYMLINK libspdk_lvol.so 00:03:07.785 CC lib/ftl/ftl_init.o 00:03:08.043 CC lib/ftl/ftl_layout.o 00:03:08.043 CC lib/ftl/ftl_debug.o 00:03:08.043 CC lib/nvmf/subsystem.o 00:03:08.302 CC lib/scsi/port.o 00:03:08.302 CC lib/scsi/scsi.o 00:03:08.302 CC lib/nbd/nbd_rpc.o 00:03:08.302 CC lib/ftl/ftl_io.o 00:03:08.302 CC lib/nvmf/nvmf.o 00:03:08.302 CC lib/ftl/ftl_sb.o 00:03:08.302 CC lib/ftl/ftl_l2p.o 00:03:08.302 CC lib/scsi/scsi_bdev.o 00:03:08.302 LIB libspdk_ublk.a 00:03:08.561 LIB libspdk_nbd.a 00:03:08.561 SO libspdk_ublk.so.3.0 00:03:08.561 SO libspdk_nbd.so.7.0 00:03:08.561 CC lib/ftl/ftl_l2p_flat.o 00:03:08.561 SYMLINK libspdk_ublk.so 00:03:08.561 CC lib/ftl/ftl_nv_cache.o 00:03:08.561 CC lib/ftl/ftl_band.o 00:03:08.561 CC lib/scsi/scsi_pr.o 00:03:08.561 SYMLINK libspdk_nbd.so 00:03:08.561 CC lib/ftl/ftl_band_ops.o 00:03:08.561 CC lib/nvmf/nvmf_rpc.o 00:03:08.819 CC lib/ftl/ftl_writer.o 00:03:08.819 CC lib/ftl/ftl_rq.o 00:03:08.819 CC lib/scsi/scsi_rpc.o 00:03:08.819 CC lib/ftl/ftl_reloc.o 00:03:08.819 CC lib/nvmf/transport.o 00:03:09.077 CC lib/nvmf/tcp.o 00:03:09.077 CC lib/ftl/ftl_l2p_cache.o 00:03:09.077 CC lib/scsi/task.o 00:03:09.077 CC lib/nvmf/stubs.o 00:03:09.335 CC lib/ftl/ftl_p2l.o 00:03:09.335 LIB libspdk_scsi.a 00:03:09.335 SO libspdk_scsi.so.9.0 00:03:09.335 CC lib/nvmf/mdns_server.o 00:03:09.335 CC lib/nvmf/rdma.o 00:03:09.335 SYMLINK libspdk_scsi.so 00:03:09.335 CC lib/nvmf/auth.o 00:03:09.592 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.592 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.592 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.592 CC lib/iscsi/conn.o 00:03:09.592 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.592 CC lib/vhost/vhost.o 00:03:09.850 CC lib/vhost/vhost_rpc.o 00:03:09.850 CC lib/vhost/vhost_scsi.o 00:03:09.850 CC lib/vhost/vhost_blk.o 00:03:09.850 CC lib/vhost/rte_vhost_user.o 00:03:09.850 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.110 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.110 CC lib/iscsi/init_grp.o 00:03:10.368 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.368 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.368 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.368 CC lib/iscsi/iscsi.o 00:03:10.368 CC lib/iscsi/md5.o 00:03:10.368 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.368 CC lib/iscsi/param.o 00:03:10.626 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.626 CC lib/iscsi/portal_grp.o 00:03:10.626 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.626 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.626 CC lib/ftl/utils/ftl_conf.o 00:03:10.626 CC lib/iscsi/tgt_node.o 00:03:10.885 CC lib/iscsi/iscsi_subsystem.o 00:03:10.885 CC lib/ftl/utils/ftl_md.o 00:03:10.885 LIB libspdk_vhost.a 00:03:10.885 SO libspdk_vhost.so.8.0 00:03:10.885 CC lib/iscsi/iscsi_rpc.o 00:03:10.885 CC lib/ftl/utils/ftl_mempool.o 00:03:10.885 CC lib/ftl/utils/ftl_bitmap.o 00:03:11.145 SYMLINK libspdk_vhost.so 00:03:11.145 CC lib/ftl/utils/ftl_property.o 00:03:11.145 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:11.145 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:11.145 CC lib/iscsi/task.o 00:03:11.145 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:11.403 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:11.403 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:11.403 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:11.403 LIB 
libspdk_nvmf.a 00:03:11.403 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:11.403 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.403 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:11.403 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.403 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.403 SO libspdk_nvmf.so.19.0 00:03:11.403 CC lib/ftl/base/ftl_base_dev.o 00:03:11.403 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.661 CC lib/ftl/ftl_trace.o 00:03:11.661 SYMLINK libspdk_nvmf.so 00:03:11.661 LIB libspdk_iscsi.a 00:03:11.919 LIB libspdk_ftl.a 00:03:11.919 SO libspdk_iscsi.so.8.0 00:03:11.919 SYMLINK libspdk_iscsi.so 00:03:12.176 SO libspdk_ftl.so.9.0 00:03:12.435 SYMLINK libspdk_ftl.so 00:03:12.693 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.951 CC module/keyring/linux/keyring.o 00:03:12.951 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.951 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.951 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.951 CC module/keyring/file/keyring.o 00:03:12.951 CC module/blob/bdev/blob_bdev.o 00:03:12.951 CC module/accel/error/accel_error.o 00:03:12.951 CC module/sock/posix/posix.o 00:03:12.951 CC module/sock/uring/uring.o 00:03:12.951 LIB libspdk_env_dpdk_rpc.a 00:03:12.951 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.951 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.951 CC module/keyring/file/keyring_rpc.o 00:03:12.951 CC module/keyring/linux/keyring_rpc.o 00:03:12.951 LIB libspdk_scheduler_gscheduler.a 00:03:12.951 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.209 CC module/accel/error/accel_error_rpc.o 00:03:13.209 LIB libspdk_scheduler_dynamic.a 00:03:13.209 SO libspdk_scheduler_gscheduler.so.4.0 00:03:13.209 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:13.209 SO libspdk_scheduler_dynamic.so.4.0 00:03:13.209 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.209 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:13.209 LIB libspdk_keyring_file.a 00:03:13.209 LIB libspdk_keyring_linux.a 00:03:13.209 SYMLINK libspdk_scheduler_dynamic.so 00:03:13.209 LIB libspdk_blob_bdev.a 00:03:13.209 SO libspdk_keyring_file.so.1.0 00:03:13.209 SO libspdk_keyring_linux.so.1.0 00:03:13.209 SO libspdk_blob_bdev.so.11.0 00:03:13.209 CC module/accel/ioat/accel_ioat.o 00:03:13.209 LIB libspdk_accel_error.a 00:03:13.209 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.209 SYMLINK libspdk_keyring_linux.so 00:03:13.209 SYMLINK libspdk_keyring_file.so 00:03:13.209 SYMLINK libspdk_blob_bdev.so 00:03:13.209 SO libspdk_accel_error.so.2.0 00:03:13.209 SYMLINK libspdk_accel_error.so 00:03:13.467 CC module/accel/dsa/accel_dsa.o 00:03:13.467 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.467 CC module/accel/iaa/accel_iaa.o 00:03:13.467 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.467 LIB libspdk_accel_ioat.a 00:03:13.467 SO libspdk_accel_ioat.so.6.0 00:03:13.467 SYMLINK libspdk_accel_ioat.so 00:03:13.467 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.467 CC module/bdev/delay/vbdev_delay.o 00:03:13.467 LIB libspdk_accel_iaa.a 00:03:13.467 CC module/bdev/error/vbdev_error.o 00:03:13.739 SO libspdk_accel_iaa.so.3.0 00:03:13.739 LIB libspdk_accel_dsa.a 00:03:13.739 LIB libspdk_sock_posix.a 00:03:13.739 SO libspdk_accel_dsa.so.5.0 00:03:13.739 SO libspdk_sock_posix.so.6.0 00:03:13.739 CC module/bdev/gpt/gpt.o 00:03:13.739 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.739 LIB libspdk_sock_uring.a 00:03:13.739 CC module/bdev/malloc/bdev_malloc.o 00:03:13.739 SYMLINK libspdk_accel_iaa.so 00:03:13.739 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.739 SO libspdk_sock_uring.so.5.0 00:03:13.739 CC module/blobfs/bdev/blobfs_bdev_rpc.o 
00:03:13.739 SYMLINK libspdk_sock_posix.so 00:03:13.739 SYMLINK libspdk_accel_dsa.so 00:03:13.739 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.739 SYMLINK libspdk_sock_uring.so 00:03:13.739 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.739 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.998 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.998 LIB libspdk_blobfs_bdev.a 00:03:13.998 LIB libspdk_bdev_delay.a 00:03:13.998 SO libspdk_blobfs_bdev.so.6.0 00:03:13.998 SO libspdk_bdev_delay.so.6.0 00:03:13.998 LIB libspdk_bdev_error.a 00:03:13.998 LIB libspdk_bdev_gpt.a 00:03:13.998 LIB libspdk_bdev_malloc.a 00:03:13.998 SYMLINK libspdk_blobfs_bdev.so 00:03:13.998 SYMLINK libspdk_bdev_delay.so 00:03:13.998 SO libspdk_bdev_error.so.6.0 00:03:13.998 SO libspdk_bdev_gpt.so.6.0 00:03:13.998 CC module/bdev/null/bdev_null.o 00:03:13.998 SO libspdk_bdev_malloc.so.6.0 00:03:14.256 CC module/bdev/nvme/bdev_nvme.o 00:03:14.256 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.256 SYMLINK libspdk_bdev_error.so 00:03:14.256 SYMLINK libspdk_bdev_gpt.so 00:03:14.256 SYMLINK libspdk_bdev_malloc.so 00:03:14.256 CC module/bdev/null/bdev_null_rpc.o 00:03:14.256 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.256 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.256 CC module/bdev/nvme/nvme_rpc.o 00:03:14.256 LIB libspdk_bdev_lvol.a 00:03:14.256 CC module/bdev/raid/bdev_raid.o 00:03:14.256 CC module/bdev/split/vbdev_split.o 00:03:14.256 SO libspdk_bdev_lvol.so.6.0 00:03:14.256 SYMLINK libspdk_bdev_lvol.so 00:03:14.256 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.256 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.256 CC module/bdev/nvme/vbdev_opal.o 00:03:14.256 LIB libspdk_bdev_null.a 00:03:14.514 SO libspdk_bdev_null.so.6.0 00:03:14.514 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.514 LIB libspdk_bdev_passthru.a 00:03:14.514 SYMLINK libspdk_bdev_null.so 00:03:14.514 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.514 CC module/bdev/raid/raid0.o 00:03:14.514 SO libspdk_bdev_passthru.so.6.0 00:03:14.514 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.514 LIB libspdk_bdev_split.a 00:03:14.514 SO libspdk_bdev_split.so.6.0 00:03:14.514 SYMLINK libspdk_bdev_passthru.so 00:03:14.514 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:14.514 SYMLINK libspdk_bdev_split.so 00:03:14.514 CC module/bdev/raid/raid1.o 00:03:14.772 CC module/bdev/raid/concat.o 00:03:14.772 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.772 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.054 CC module/bdev/ftl/bdev_ftl.o 00:03:15.054 CC module/bdev/uring/bdev_uring.o 00:03:15.054 CC module/bdev/uring/bdev_uring_rpc.o 00:03:15.054 CC module/bdev/aio/bdev_aio.o 00:03:15.054 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.054 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.054 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.054 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.054 LIB libspdk_bdev_zone_block.a 00:03:15.054 SO libspdk_bdev_zone_block.so.6.0 00:03:15.323 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.323 LIB libspdk_bdev_raid.a 00:03:15.323 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.323 SYMLINK libspdk_bdev_zone_block.so 00:03:15.323 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.323 LIB libspdk_bdev_uring.a 00:03:15.323 SO libspdk_bdev_raid.so.6.0 00:03:15.323 SO libspdk_bdev_uring.so.6.0 00:03:15.323 LIB libspdk_bdev_ftl.a 00:03:15.323 LIB libspdk_bdev_iscsi.a 00:03:15.323 SO libspdk_bdev_iscsi.so.6.0 00:03:15.323 SO libspdk_bdev_ftl.so.6.0 00:03:15.323 SYMLINK libspdk_bdev_raid.so 00:03:15.323 SYMLINK libspdk_bdev_uring.so 00:03:15.323 
LIB libspdk_bdev_aio.a 00:03:15.323 SO libspdk_bdev_aio.so.6.0 00:03:15.323 SYMLINK libspdk_bdev_iscsi.so 00:03:15.323 SYMLINK libspdk_bdev_ftl.so 00:03:15.580 SYMLINK libspdk_bdev_aio.so 00:03:15.580 LIB libspdk_bdev_virtio.a 00:03:15.580 SO libspdk_bdev_virtio.so.6.0 00:03:15.580 SYMLINK libspdk_bdev_virtio.so 00:03:16.511 LIB libspdk_bdev_nvme.a 00:03:16.512 SO libspdk_bdev_nvme.so.7.0 00:03:16.512 SYMLINK libspdk_bdev_nvme.so 00:03:17.077 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.077 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.077 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.077 CC module/event/subsystems/vmd/vmd.o 00:03:17.077 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.077 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.077 CC module/event/subsystems/keyring/keyring.o 00:03:17.077 CC module/event/subsystems/sock/sock.o 00:03:17.334 LIB libspdk_event_scheduler.a 00:03:17.334 LIB libspdk_event_vhost_blk.a 00:03:17.334 LIB libspdk_event_keyring.a 00:03:17.334 LIB libspdk_event_vmd.a 00:03:17.334 SO libspdk_event_scheduler.so.4.0 00:03:17.335 SO libspdk_event_vhost_blk.so.3.0 00:03:17.335 LIB libspdk_event_sock.a 00:03:17.335 LIB libspdk_event_iobuf.a 00:03:17.335 SO libspdk_event_keyring.so.1.0 00:03:17.335 SO libspdk_event_vmd.so.6.0 00:03:17.335 SO libspdk_event_sock.so.5.0 00:03:17.335 SO libspdk_event_iobuf.so.3.0 00:03:17.335 SYMLINK libspdk_event_vhost_blk.so 00:03:17.335 SYMLINK libspdk_event_keyring.so 00:03:17.335 SYMLINK libspdk_event_scheduler.so 00:03:17.335 SYMLINK libspdk_event_vmd.so 00:03:17.335 SYMLINK libspdk_event_sock.so 00:03:17.335 SYMLINK libspdk_event_iobuf.so 00:03:17.593 CC module/event/subsystems/accel/accel.o 00:03:17.851 LIB libspdk_event_accel.a 00:03:17.851 SO libspdk_event_accel.so.6.0 00:03:18.133 SYMLINK libspdk_event_accel.so 00:03:18.390 CC module/event/subsystems/bdev/bdev.o 00:03:18.390 LIB libspdk_event_bdev.a 00:03:18.648 SO libspdk_event_bdev.so.6.0 00:03:18.648 SYMLINK libspdk_event_bdev.so 00:03:18.906 CC module/event/subsystems/scsi/scsi.o 00:03:18.906 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.906 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.906 CC module/event/subsystems/ublk/ublk.o 00:03:18.906 CC module/event/subsystems/nbd/nbd.o 00:03:18.906 LIB libspdk_event_ublk.a 00:03:18.906 LIB libspdk_event_nbd.a 00:03:18.906 LIB libspdk_event_scsi.a 00:03:19.163 SO libspdk_event_ublk.so.3.0 00:03:19.163 SO libspdk_event_nbd.so.6.0 00:03:19.163 SO libspdk_event_scsi.so.6.0 00:03:19.163 SYMLINK libspdk_event_nbd.so 00:03:19.163 LIB libspdk_event_nvmf.a 00:03:19.163 SYMLINK libspdk_event_scsi.so 00:03:19.163 SYMLINK libspdk_event_ublk.so 00:03:19.163 SO libspdk_event_nvmf.so.6.0 00:03:19.163 SYMLINK libspdk_event_nvmf.so 00:03:19.421 CC module/event/subsystems/iscsi/iscsi.o 00:03:19.421 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.678 LIB libspdk_event_vhost_scsi.a 00:03:19.678 LIB libspdk_event_iscsi.a 00:03:19.678 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.678 SO libspdk_event_iscsi.so.6.0 00:03:19.678 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.678 SYMLINK libspdk_event_iscsi.so 00:03:19.936 SO libspdk.so.6.0 00:03:19.936 SYMLINK libspdk.so 00:03:20.193 CXX app/trace/trace.o 00:03:20.193 CC app/trace_record/trace_record.o 00:03:20.193 TEST_HEADER include/spdk/accel.h 00:03:20.193 TEST_HEADER include/spdk/accel_module.h 00:03:20.193 TEST_HEADER include/spdk/assert.h 00:03:20.193 TEST_HEADER include/spdk/barrier.h 00:03:20.193 TEST_HEADER include/spdk/base64.h 
00:03:20.193 TEST_HEADER include/spdk/bdev.h 00:03:20.193 TEST_HEADER include/spdk/bdev_module.h 00:03:20.193 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.193 TEST_HEADER include/spdk/bit_array.h 00:03:20.193 TEST_HEADER include/spdk/bit_pool.h 00:03:20.193 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.193 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.193 TEST_HEADER include/spdk/blobfs.h 00:03:20.193 TEST_HEADER include/spdk/blob.h 00:03:20.193 TEST_HEADER include/spdk/conf.h 00:03:20.193 TEST_HEADER include/spdk/config.h 00:03:20.193 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:20.193 TEST_HEADER include/spdk/cpuset.h 00:03:20.193 CC app/nvmf_tgt/nvmf_main.o 00:03:20.193 TEST_HEADER include/spdk/crc16.h 00:03:20.193 TEST_HEADER include/spdk/crc32.h 00:03:20.193 TEST_HEADER include/spdk/crc64.h 00:03:20.193 TEST_HEADER include/spdk/dif.h 00:03:20.193 TEST_HEADER include/spdk/dma.h 00:03:20.193 TEST_HEADER include/spdk/endian.h 00:03:20.193 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.193 TEST_HEADER include/spdk/env.h 00:03:20.193 TEST_HEADER include/spdk/event.h 00:03:20.193 TEST_HEADER include/spdk/fd_group.h 00:03:20.193 TEST_HEADER include/spdk/fd.h 00:03:20.193 TEST_HEADER include/spdk/file.h 00:03:20.193 TEST_HEADER include/spdk/ftl.h 00:03:20.193 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.193 TEST_HEADER include/spdk/hexlify.h 00:03:20.193 TEST_HEADER include/spdk/histogram_data.h 00:03:20.193 CC test/thread/poller_perf/poller_perf.o 00:03:20.193 TEST_HEADER include/spdk/idxd.h 00:03:20.193 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.193 TEST_HEADER include/spdk/init.h 00:03:20.193 TEST_HEADER include/spdk/ioat.h 00:03:20.193 CC examples/util/zipf/zipf.o 00:03:20.193 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.193 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.193 TEST_HEADER include/spdk/json.h 00:03:20.193 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.193 TEST_HEADER include/spdk/keyring.h 00:03:20.193 CC examples/ioat/perf/perf.o 00:03:20.193 TEST_HEADER include/spdk/keyring_module.h 00:03:20.193 TEST_HEADER include/spdk/likely.h 00:03:20.193 TEST_HEADER include/spdk/log.h 00:03:20.193 TEST_HEADER include/spdk/lvol.h 00:03:20.193 TEST_HEADER include/spdk/memory.h 00:03:20.193 TEST_HEADER include/spdk/mmio.h 00:03:20.193 TEST_HEADER include/spdk/nbd.h 00:03:20.193 TEST_HEADER include/spdk/net.h 00:03:20.193 TEST_HEADER include/spdk/notify.h 00:03:20.193 CC test/app/bdev_svc/bdev_svc.o 00:03:20.193 TEST_HEADER include/spdk/nvme.h 00:03:20.193 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.193 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.193 CC test/dma/test_dma/test_dma.o 00:03:20.193 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.193 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.193 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.193 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.193 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.193 TEST_HEADER include/spdk/nvmf.h 00:03:20.193 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.193 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.193 TEST_HEADER include/spdk/opal.h 00:03:20.193 TEST_HEADER include/spdk/opal_spec.h 00:03:20.193 TEST_HEADER include/spdk/pci_ids.h 00:03:20.450 TEST_HEADER include/spdk/pipe.h 00:03:20.450 TEST_HEADER include/spdk/queue.h 00:03:20.450 TEST_HEADER include/spdk/reduce.h 00:03:20.450 TEST_HEADER include/spdk/rpc.h 00:03:20.450 TEST_HEADER include/spdk/scheduler.h 00:03:20.450 TEST_HEADER include/spdk/scsi.h 00:03:20.450 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.450 TEST_HEADER 
include/spdk/sock.h 00:03:20.450 TEST_HEADER include/spdk/stdinc.h 00:03:20.450 TEST_HEADER include/spdk/string.h 00:03:20.450 TEST_HEADER include/spdk/thread.h 00:03:20.450 TEST_HEADER include/spdk/trace.h 00:03:20.450 TEST_HEADER include/spdk/trace_parser.h 00:03:20.450 TEST_HEADER include/spdk/tree.h 00:03:20.450 TEST_HEADER include/spdk/ublk.h 00:03:20.450 TEST_HEADER include/spdk/util.h 00:03:20.450 TEST_HEADER include/spdk/uuid.h 00:03:20.450 TEST_HEADER include/spdk/version.h 00:03:20.450 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.450 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.450 TEST_HEADER include/spdk/vhost.h 00:03:20.450 TEST_HEADER include/spdk/vmd.h 00:03:20.450 TEST_HEADER include/spdk/xor.h 00:03:20.450 TEST_HEADER include/spdk/zipf.h 00:03:20.450 CXX test/cpp_headers/accel.o 00:03:20.450 LINK nvmf_tgt 00:03:20.450 LINK interrupt_tgt 00:03:20.450 LINK poller_perf 00:03:20.450 LINK zipf 00:03:20.450 LINK spdk_trace_record 00:03:20.450 LINK bdev_svc 00:03:20.450 LINK ioat_perf 00:03:20.450 CXX test/cpp_headers/accel_module.o 00:03:20.450 LINK spdk_trace 00:03:20.450 CXX test/cpp_headers/assert.o 00:03:20.450 CXX test/cpp_headers/barrier.o 00:03:20.707 CXX test/cpp_headers/base64.o 00:03:20.707 LINK test_dma 00:03:20.707 CC examples/ioat/verify/verify.o 00:03:20.707 CXX test/cpp_headers/bdev.o 00:03:20.707 CXX test/cpp_headers/bdev_module.o 00:03:20.707 CC examples/sock/hello_world/hello_sock.o 00:03:20.707 CC examples/thread/thread/thread_ex.o 00:03:20.964 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.964 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.964 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.964 CC examples/idxd/perf/perf.o 00:03:20.964 LINK verify 00:03:20.964 CXX test/cpp_headers/bdev_zone.o 00:03:20.964 LINK hello_sock 00:03:20.964 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.964 LINK lsvmd 00:03:20.964 LINK thread 00:03:20.964 LINK iscsi_tgt 00:03:21.221 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.221 CXX test/cpp_headers/bit_array.o 00:03:21.221 CC app/spdk_tgt/spdk_tgt.o 00:03:21.221 LINK idxd_perf 00:03:21.221 LINK nvme_fuzz 00:03:21.221 CC examples/vmd/led/led.o 00:03:21.478 CC test/event/event_perf/event_perf.o 00:03:21.478 CC test/event/reactor/reactor.o 00:03:21.478 CXX test/cpp_headers/bit_pool.o 00:03:21.478 CC test/event/reactor_perf/reactor_perf.o 00:03:21.478 LINK led 00:03:21.478 LINK spdk_tgt 00:03:21.478 LINK reactor 00:03:21.478 CC test/event/app_repeat/app_repeat.o 00:03:21.478 LINK event_perf 00:03:21.478 LINK reactor_perf 00:03:21.478 CXX test/cpp_headers/blob_bdev.o 00:03:21.478 CC test/event/scheduler/scheduler.o 00:03:21.737 LINK app_repeat 00:03:21.737 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.737 CC examples/nvme/hello_world/hello_world.o 00:03:21.737 CC app/spdk_lspci/spdk_lspci.o 00:03:21.737 LINK mem_callbacks 00:03:21.737 LINK scheduler 00:03:21.737 CC examples/accel/perf/accel_perf.o 00:03:21.996 CC examples/blob/hello_world/hello_blob.o 00:03:21.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.996 LINK spdk_lspci 00:03:21.996 CXX test/cpp_headers/blobfs.o 00:03:21.996 CC app/spdk_nvme_perf/perf.o 00:03:21.996 CXX test/cpp_headers/blob.o 00:03:21.996 LINK hello_world 00:03:21.996 CC test/env/vtophys/vtophys.o 00:03:21.996 LINK hello_blob 00:03:22.254 CXX test/cpp_headers/conf.o 00:03:22.254 CC app/spdk_nvme_identify/identify.o 00:03:22.254 LINK vtophys 00:03:22.254 CC test/nvme/aer/aer.o 00:03:22.254 CC examples/nvme/reconnect/reconnect.o 00:03:22.254 LINK 
vhost_fuzz 00:03:22.254 LINK accel_perf 00:03:22.512 CXX test/cpp_headers/config.o 00:03:22.512 CXX test/cpp_headers/cpuset.o 00:03:22.512 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.512 CC examples/blob/cli/blobcli.o 00:03:22.512 CC test/nvme/reset/reset.o 00:03:22.512 LINK aer 00:03:22.512 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.512 CXX test/cpp_headers/crc16.o 00:03:22.512 LINK iscsi_fuzz 00:03:22.768 LINK reconnect 00:03:22.768 LINK env_dpdk_post_init 00:03:22.769 LINK spdk_nvme_perf 00:03:22.769 CXX test/cpp_headers/crc32.o 00:03:22.769 LINK spdk_nvme_discover 00:03:22.769 LINK reset 00:03:22.769 CC app/spdk_top/spdk_top.o 00:03:22.769 CC test/env/memory/memory_ut.o 00:03:23.027 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.027 LINK blobcli 00:03:23.027 CC test/app/histogram_perf/histogram_perf.o 00:03:23.027 CXX test/cpp_headers/crc64.o 00:03:23.027 LINK spdk_nvme_identify 00:03:23.027 CC examples/nvme/arbitration/arbitration.o 00:03:23.027 CC examples/nvme/hotplug/hotplug.o 00:03:23.027 CC test/nvme/sgl/sgl.o 00:03:23.027 LINK histogram_perf 00:03:23.027 CXX test/cpp_headers/dif.o 00:03:23.285 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.285 CXX test/cpp_headers/dma.o 00:03:23.285 LINK hotplug 00:03:23.285 CC test/app/jsoncat/jsoncat.o 00:03:23.285 LINK sgl 00:03:23.285 LINK nvme_manage 00:03:23.285 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.285 LINK arbitration 00:03:23.544 LINK cmb_copy 00:03:23.544 CXX test/cpp_headers/endian.o 00:03:23.544 LINK jsoncat 00:03:23.544 CXX test/cpp_headers/env_dpdk.o 00:03:23.544 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.544 CC test/nvme/e2edp/nvme_dp.o 00:03:23.544 CC test/rpc_client/rpc_client_test.o 00:03:23.544 LINK hello_bdev 00:03:23.544 CC examples/nvme/abort/abort.o 00:03:23.802 LINK spdk_top 00:03:23.802 CC test/app/stub/stub.o 00:03:23.802 CXX test/cpp_headers/env.o 00:03:23.802 CC test/accel/dif/dif.o 00:03:23.802 LINK rpc_client_test 00:03:23.802 LINK nvme_dp 00:03:23.802 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.087 LINK stub 00:03:24.087 CXX test/cpp_headers/event.o 00:03:24.087 LINK memory_ut 00:03:24.087 CC app/vhost/vhost.o 00:03:24.087 LINK abort 00:03:24.087 LINK pmr_persistence 00:03:24.087 CXX test/cpp_headers/fd_group.o 00:03:24.087 CC test/nvme/overhead/overhead.o 00:03:24.087 CC test/blobfs/mkfs/mkfs.o 00:03:24.345 LINK dif 00:03:24.345 LINK vhost 00:03:24.345 CC test/env/pci/pci_ut.o 00:03:24.345 CXX test/cpp_headers/fd.o 00:03:24.345 CXX test/cpp_headers/file.o 00:03:24.345 CC app/spdk_dd/spdk_dd.o 00:03:24.345 LINK bdevperf 00:03:24.345 CC test/lvol/esnap/esnap.o 00:03:24.345 LINK mkfs 00:03:24.345 LINK overhead 00:03:24.345 CXX test/cpp_headers/ftl.o 00:03:24.603 CC test/nvme/err_injection/err_injection.o 00:03:24.603 CC app/fio/nvme/fio_plugin.o 00:03:24.603 CC test/bdev/bdevio/bdevio.o 00:03:24.603 CXX test/cpp_headers/gpt_spec.o 00:03:24.603 LINK pci_ut 00:03:24.603 LINK err_injection 00:03:24.603 CC test/nvme/startup/startup.o 00:03:24.862 CC app/fio/bdev/fio_plugin.o 00:03:24.862 CC examples/nvmf/nvmf/nvmf.o 00:03:24.862 LINK spdk_dd 00:03:24.862 CXX test/cpp_headers/hexlify.o 00:03:24.862 LINK startup 00:03:24.862 CXX test/cpp_headers/histogram_data.o 00:03:24.862 CC test/nvme/reserve/reserve.o 00:03:25.120 CXX test/cpp_headers/idxd.o 00:03:25.120 CXX test/cpp_headers/idxd_spec.o 00:03:25.120 LINK bdevio 00:03:25.120 LINK nvmf 00:03:25.120 CC test/nvme/simple_copy/simple_copy.o 00:03:25.120 LINK reserve 00:03:25.120 LINK spdk_nvme 00:03:25.120 CXX 
test/cpp_headers/init.o 00:03:25.120 CC test/nvme/connect_stress/connect_stress.o 00:03:25.120 LINK spdk_bdev 00:03:25.379 CXX test/cpp_headers/ioat.o 00:03:25.379 CXX test/cpp_headers/ioat_spec.o 00:03:25.379 CC test/nvme/boot_partition/boot_partition.o 00:03:25.379 CXX test/cpp_headers/iscsi_spec.o 00:03:25.379 CC test/nvme/compliance/nvme_compliance.o 00:03:25.379 LINK simple_copy 00:03:25.379 LINK connect_stress 00:03:25.379 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.379 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.379 CXX test/cpp_headers/json.o 00:03:25.379 LINK boot_partition 00:03:25.638 CC test/nvme/fdp/fdp.o 00:03:25.638 CXX test/cpp_headers/jsonrpc.o 00:03:25.638 CC test/nvme/cuse/cuse.o 00:03:25.638 CXX test/cpp_headers/keyring.o 00:03:25.638 CXX test/cpp_headers/keyring_module.o 00:03:25.638 CXX test/cpp_headers/likely.o 00:03:25.638 LINK fused_ordering 00:03:25.638 LINK doorbell_aers 00:03:25.638 LINK nvme_compliance 00:03:25.638 CXX test/cpp_headers/log.o 00:03:25.638 CXX test/cpp_headers/lvol.o 00:03:25.897 CXX test/cpp_headers/memory.o 00:03:25.897 CXX test/cpp_headers/mmio.o 00:03:25.897 CXX test/cpp_headers/nbd.o 00:03:25.897 CXX test/cpp_headers/net.o 00:03:25.897 CXX test/cpp_headers/notify.o 00:03:25.897 CXX test/cpp_headers/nvme.o 00:03:25.897 LINK fdp 00:03:25.897 CXX test/cpp_headers/nvme_intel.o 00:03:25.897 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.897 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.897 CXX test/cpp_headers/nvme_spec.o 00:03:25.897 CXX test/cpp_headers/nvme_zns.o 00:03:25.897 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.897 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.155 CXX test/cpp_headers/nvmf.o 00:03:26.155 CXX test/cpp_headers/nvmf_spec.o 00:03:26.155 CXX test/cpp_headers/nvmf_transport.o 00:03:26.155 CXX test/cpp_headers/opal.o 00:03:26.155 CXX test/cpp_headers/opal_spec.o 00:03:26.155 CXX test/cpp_headers/pci_ids.o 00:03:26.155 CXX test/cpp_headers/pipe.o 00:03:26.155 CXX test/cpp_headers/queue.o 00:03:26.155 CXX test/cpp_headers/reduce.o 00:03:26.155 CXX test/cpp_headers/rpc.o 00:03:26.155 CXX test/cpp_headers/scheduler.o 00:03:26.413 CXX test/cpp_headers/scsi.o 00:03:26.413 CXX test/cpp_headers/scsi_spec.o 00:03:26.413 CXX test/cpp_headers/sock.o 00:03:26.413 CXX test/cpp_headers/stdinc.o 00:03:26.413 CXX test/cpp_headers/string.o 00:03:26.413 CXX test/cpp_headers/thread.o 00:03:26.413 CXX test/cpp_headers/trace.o 00:03:26.413 CXX test/cpp_headers/trace_parser.o 00:03:26.413 CXX test/cpp_headers/tree.o 00:03:26.413 CXX test/cpp_headers/ublk.o 00:03:26.413 CXX test/cpp_headers/util.o 00:03:26.413 CXX test/cpp_headers/uuid.o 00:03:26.413 CXX test/cpp_headers/version.o 00:03:26.671 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.671 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.671 CXX test/cpp_headers/vhost.o 00:03:26.671 CXX test/cpp_headers/vmd.o 00:03:26.671 CXX test/cpp_headers/xor.o 00:03:26.671 CXX test/cpp_headers/zipf.o 00:03:26.929 LINK cuse 00:03:30.213 LINK esnap 00:03:30.213 00:03:30.213 real 1m4.576s 00:03:30.213 user 6m20.371s 00:03:30.213 sys 1m38.845s 00:03:30.213 22:29:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:30.213 22:29:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:30.213 ************************************ 00:03:30.213 END TEST make 00:03:30.213 ************************************ 00:03:30.213 22:29:47 -- common/autotest_common.sh@1142 -- $ return 0 00:03:30.213 22:29:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.213 22:29:47 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:03:30.213 22:29:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.213 22:29:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.213 22:29:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.213 22:29:47 -- pm/common@44 -- $ pid=5135 00:03:30.213 22:29:47 -- pm/common@50 -- $ kill -TERM 5135 00:03:30.213 22:29:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.213 22:29:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.213 22:29:47 -- pm/common@44 -- $ pid=5137 00:03:30.213 22:29:47 -- pm/common@50 -- $ kill -TERM 5137 00:03:30.213 22:29:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:30.213 22:29:47 -- nvmf/common.sh@7 -- # uname -s 00:03:30.213 22:29:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:30.213 22:29:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:30.213 22:29:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:30.213 22:29:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:30.213 22:29:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:30.213 22:29:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:30.213 22:29:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:30.213 22:29:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:30.213 22:29:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:30.213 22:29:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:30.213 22:29:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:03:30.213 22:29:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:03:30.213 22:29:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:30.213 22:29:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:30.213 22:29:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:30.213 22:29:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:30.213 22:29:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:30.213 22:29:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.213 22:29:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.213 22:29:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.213 22:29:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.213 22:29:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.213 22:29:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.213 22:29:47 -- paths/export.sh@5 -- # export PATH 00:03:30.213 22:29:47 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.213 22:29:47 -- nvmf/common.sh@47 -- # : 0 00:03:30.213 22:29:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:30.213 22:29:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:30.213 22:29:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.213 22:29:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.213 22:29:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.213 22:29:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:30.213 22:29:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:30.213 22:29:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:30.213 22:29:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:30.213 22:29:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:30.213 22:29:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:30.213 22:29:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:30.213 22:29:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:30.213 22:29:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:30.213 22:29:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:30.213 22:29:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:30.213 22:29:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.213 22:29:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.213 22:29:48 -- spdk/autotest.sh@48 -- # udevadm_pid=52765 00:03:30.213 22:29:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.213 22:29:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.213 22:29:48 -- pm/common@17 -- # local monitor 00:03:30.213 22:29:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.213 22:29:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.213 22:29:48 -- pm/common@25 -- # sleep 1 00:03:30.213 22:29:48 -- pm/common@21 -- # date +%s 00:03:30.213 22:29:48 -- pm/common@21 -- # date +%s 00:03:30.213 22:29:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721082588 00:03:30.213 22:29:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721082588 00:03:30.470 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721082588_collect-cpu-load.pm.log 00:03:30.470 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721082588_collect-vmstat.pm.log 00:03:31.404 22:29:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.404 22:29:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.404 22:29:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:31.404 22:29:49 -- common/autotest_common.sh@10 -- # set +x 00:03:31.404 22:29:49 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.404 22:29:49 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:31.404 22:29:49 -- common/autotest_common.sh@10 -- # set +x 
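The resource monitors started above follow a simple start/stop convention: each collector runs in the background, its PID is written to a <name>.pid file under the power output directory, and the cleanup path near the top of this log later sends those PIDs a SIGTERM. A minimal standalone sketch of that pattern follows; the directory layout and helper-function names are illustrative assumptions, not the SPDK scripts themselves.

#!/usr/bin/env bash
# Illustrative sketch of the monitor start/stop convention visible in this log.
# power_dir, start_monitor and stop_monitors are assumed names, not SPDK's code;
# the -d/-l/-p flags and the .pid files mirror what the log shows above.
power_dir=/home/vagrant/spdk_repo/output/power
mkdir -p "$power_dir"

start_monitor() {
    # $1 = collector script, $2 = timestamp suffix used in the log file name
    "$1" -d "$power_dir" -l -p "monitor.autotest.sh.$2" &
    echo $! > "$power_dir/$(basename "$1").pid"
}

stop_monitors() {
    # Kill whatever PIDs the .pid files still name (autotest's cleanup step).
    local pidfile
    for pidfile in "$power_dir"/*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    done
}

ts=$(date +%s)
start_monitor /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load "$ts"
start_monitor /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat  "$ts"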
00:03:31.404 22:29:49 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:31.404 22:29:49 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:31.404 22:29:49 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:31.404 22:29:49 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:31.404 22:29:49 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:31.404 22:29:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.404 22:29:49 -- common/autotest_common.sh@1455 -- # uname 00:03:31.404 22:29:49 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:31.404 22:29:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.404 22:29:49 -- common/autotest_common.sh@1475 -- # uname 00:03:31.404 22:29:49 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:31.404 22:29:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:31.404 22:29:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:31.404 22:29:49 -- spdk/autotest.sh@72 -- # hash lcov 00:03:31.404 22:29:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:31.404 22:29:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:31.404 --rc lcov_branch_coverage=1 00:03:31.404 --rc lcov_function_coverage=1 00:03:31.404 --rc genhtml_branch_coverage=1 00:03:31.404 --rc genhtml_function_coverage=1 00:03:31.404 --rc genhtml_legend=1 00:03:31.404 --rc geninfo_all_blocks=1 00:03:31.404 ' 00:03:31.404 22:29:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:31.404 --rc lcov_branch_coverage=1 00:03:31.404 --rc lcov_function_coverage=1 00:03:31.404 --rc genhtml_branch_coverage=1 00:03:31.404 --rc genhtml_function_coverage=1 00:03:31.404 --rc genhtml_legend=1 00:03:31.404 --rc geninfo_all_blocks=1 00:03:31.404 ' 00:03:31.404 22:29:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:31.404 --rc lcov_branch_coverage=1 00:03:31.404 --rc lcov_function_coverage=1 00:03:31.404 --rc genhtml_branch_coverage=1 00:03:31.404 --rc genhtml_function_coverage=1 00:03:31.404 --rc genhtml_legend=1 00:03:31.404 --rc geninfo_all_blocks=1 00:03:31.404 --no-external' 00:03:31.404 22:29:49 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:31.404 --rc lcov_branch_coverage=1 00:03:31.404 --rc lcov_function_coverage=1 00:03:31.404 --rc genhtml_branch_coverage=1 00:03:31.404 --rc genhtml_function_coverage=1 00:03:31.404 --rc genhtml_legend=1 00:03:31.404 --rc geninfo_all_blocks=1 00:03:31.404 --no-external' 00:03:31.404 22:29:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:31.404 lcov: LCOV version 1.14 00:03:31.404 22:29:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:46.291 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.291 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:56.397 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:56.397 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:56.397 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions 
found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions 
found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:56.398 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:56.398 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:59.736 22:30:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:59.736 22:30:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.736 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:03:59.736 22:30:16 -- spdk/autotest.sh@91 -- # rm -f 00:03:59.736 22:30:16 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.994 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:59.994 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:59.994 22:30:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:59.994 22:30:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:59.994 22:30:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:59.994 22:30:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:59.994 22:30:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.994 22:30:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:59.994 22:30:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:59.994 22:30:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.994 22:30:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:59.994 22:30:17 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:59.994 22:30:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.994 22:30:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:59.994 22:30:17 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:59.994 22:30:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.994 
22:30:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:59.994 22:30:17 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:59.994 22:30:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:59.994 22:30:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.994 22:30:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:59.994 22:30:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.994 22:30:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.994 22:30:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:59.994 22:30:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:59.994 22:30:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.994 No valid GPT data, bailing 00:03:59.994 22:30:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.994 22:30:17 -- scripts/common.sh@391 -- # pt= 00:03:59.994 22:30:17 -- scripts/common.sh@392 -- # return 1 00:03:59.994 22:30:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.994 1+0 records in 00:03:59.994 1+0 records out 00:03:59.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467554 s, 224 MB/s 00:03:59.994 22:30:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.994 22:30:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.994 22:30:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:59.994 22:30:17 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:59.994 22:30:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.252 No valid GPT data, bailing 00:04:00.252 22:30:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.252 22:30:17 -- scripts/common.sh@391 -- # pt= 00:04:00.252 22:30:17 -- scripts/common.sh@392 -- # return 1 00:04:00.252 22:30:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.252 1+0 records in 00:04:00.252 1+0 records out 00:04:00.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0033143 s, 316 MB/s 00:04:00.252 22:30:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.252 22:30:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:00.252 22:30:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:00.252 22:30:17 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:00.252 22:30:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:00.252 No valid GPT data, bailing 00:04:00.252 22:30:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:00.252 22:30:17 -- scripts/common.sh@391 -- # pt= 00:04:00.252 22:30:17 -- scripts/common.sh@392 -- # return 1 00:04:00.252 22:30:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:00.252 1+0 records in 00:04:00.252 1+0 records out 00:04:00.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425695 s, 246 MB/s 00:04:00.252 22:30:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.252 22:30:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:00.252 22:30:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:00.252 22:30:17 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:00.252 22:30:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:00.252 No valid GPT data, bailing 00:04:00.252 22:30:18 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:00.252 22:30:18 -- scripts/common.sh@391 -- # pt= 00:04:00.252 22:30:18 -- scripts/common.sh@392 -- # return 1 00:04:00.252 22:30:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:00.252 1+0 records in 00:04:00.252 1+0 records out 00:04:00.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391757 s, 268 MB/s 00:04:00.253 22:30:18 -- spdk/autotest.sh@118 -- # sync 00:04:00.253 22:30:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.253 22:30:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.253 22:30:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.151 22:30:19 -- spdk/autotest.sh@124 -- # uname -s 00:04:02.151 22:30:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:02.151 22:30:19 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:02.151 22:30:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.151 22:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.151 22:30:19 -- common/autotest_common.sh@10 -- # set +x 00:04:02.151 ************************************ 00:04:02.151 START TEST setup.sh 00:04:02.151 ************************************ 00:04:02.151 22:30:19 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:02.409 * Looking for test storage... 00:04:02.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.409 22:30:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:02.409 22:30:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:02.409 22:30:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:02.409 22:30:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.409 22:30:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.409 22:30:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.409 ************************************ 00:04:02.409 START TEST acl 00:04:02.409 ************************************ 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:02.409 * Looking for test storage... 
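The device-preparation loop above boils down to: for each whole NVMe namespace that is not already claimed, probe for an existing partition table and, if none is found, zero the first MiB so the tests start from a blank disk. A condensed sketch of that flow is below; it keys off blkid alone and skips the scripts/spdk-gpt.py probe the log runs first, so treat it as an approximation rather than the exact autotest logic.

#!/usr/bin/env bash
# Condensed sketch of the pre-test disk wipe seen above.
# The real run also calls scripts/spdk-gpt.py before blkid; omitted here.
shopt -s extglob nullglob
for dev in /dev/nvme*n!(*p*); do        # whole namespaces, not partitions
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then               # no recognizable partition table
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
sync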
00:04:02.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.409 22:30:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.409 22:30:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.409 22:30:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:02.409 22:30:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:02.409 22:30:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:02.409 22:30:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:02.409 22:30:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:02.409 22:30:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.409 22:30:20 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.343 22:30:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:03.343 22:30:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:03.343 22:30:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.343 22:30:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:03.343 22:30:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.343 22:30:20 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.908 22:30:21 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.908 Hugepages 00:04:03.908 node hugesize free / total 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.908 00:04:03.908 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.908 22:30:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:04.165 22:30:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:04.165 22:30:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.165 22:30:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.165 22:30:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:04.165 ************************************ 00:04:04.165 START TEST denied 00:04:04.165 ************************************ 00:04:04.165 22:30:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:04.165 22:30:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:04.165 22:30:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:04.165 22:30:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.165 22:30:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:04.165 22:30:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.096 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.096 22:30:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.677 00:04:05.677 real 0m1.530s 00:04:05.677 user 0m0.591s 00:04:05.677 sys 0m0.882s 00:04:05.677 22:30:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.677 ************************************ 00:04:05.677 END TEST denied 00:04:05.677 ************************************ 00:04:05.677 22:30:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:05.677 22:30:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:05.677 22:30:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:05.677 22:30:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.677 22:30:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.677 22:30:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:05.677 ************************************ 00:04:05.677 START TEST allowed 00:04:05.677 ************************************ 00:04:05.677 22:30:23 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:05.677 22:30:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:05.677 22:30:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:05.677 22:30:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:05.677 22:30:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.677 22:30:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.611 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.611 22:30:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.546 00:04:07.546 real 0m1.620s 00:04:07.546 user 0m0.709s 00:04:07.546 sys 0m0.897s 00:04:07.546 22:30:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:07.546 ************************************ 00:04:07.546 22:30:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:07.546 END TEST allowed 00:04:07.546 ************************************ 00:04:07.546 22:30:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:07.546 ************************************ 00:04:07.546 END TEST acl 00:04:07.546 ************************************ 00:04:07.546 00:04:07.546 real 0m5.060s 00:04:07.546 user 0m2.157s 00:04:07.546 sys 0m2.835s 00:04:07.546 22:30:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.546 22:30:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:07.546 22:30:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:07.546 22:30:25 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:07.546 22:30:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.546 22:30:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.546 22:30:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.546 ************************************ 00:04:07.546 START TEST hugepages 00:04:07.546 ************************************ 00:04:07.546 22:30:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:07.546 * Looking for test storage... 00:04:07.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 6008868 kB' 'MemAvailable: 7389212 kB' 'Buffers: 2436 kB' 'Cached: 1594596 kB' 'SwapCached: 0 kB' 'Active: 435292 kB' 'Inactive: 1265684 kB' 'Active(anon): 114432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 105868 kB' 'Mapped: 48788 kB' 'Shmem: 10488 kB' 'KReclaimable: 61476 kB' 'Slab: 132692 kB' 'SReclaimable: 61476 kB' 'SUnreclaim: 71216 kB' 'KernelStack: 6188 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412444 kB' 'Committed_AS: 334124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.546 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:07.547 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.548 22:30:25 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.548 22:30:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:07.548 22:30:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.548 22:30:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.548 22:30:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.548 ************************************ 00:04:07.548 START TEST default_setup 00:04:07.548 ************************************ 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.548 22:30:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.513 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.513 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8113292 kB' 'MemAvailable: 9493472 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452288 kB' 'Inactive: 1265692 kB' 'Active(anon): 131428 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132396 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71260 kB' 'KernelStack: 6224 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.513 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
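The field-by-field scan in this trace is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' and read -r var val _, skipping every key that is not the one requested and echoing the value column when it matches (2048 for Hugepagesize above, 0 for AnonHugePages here). A minimal standalone sketch of that parsing pattern — hypothetical helper name, not the SPDK function itself — assuming the usual "Key:   value kB" layout of /proc/meminfo:

    # sketch: print the value column for a given /proc/meminfo key
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # skip every field until the requested one is reached
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # example: default hugepage size in kB (2048 on this run)
    get_meminfo_value Hugepagesize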
00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.514 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8112544 kB' 'MemAvailable: 9492728 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452260 kB' 'Inactive: 1265696 kB' 'Active(anon): 131400 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122596 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132388 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71252 kB' 'KernelStack: 6144 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 350768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
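The same reader is pointed in turn at AnonHugePages, HugePages_Surp and HugePages_Rsvd: verify_nr_hugepages collects the anonymous-THP, surplus and reserved counters before comparing the pool against the expected per-node totals. A rough sketch of that bookkeeping, reusing the get_meminfo_value helper sketched above and assuming only the global /proc/meminfo counters are consulted (the per-node meminfo files are handled separately in the real script):

    # sketch: gather the hugepage counters verify_nr_hugepages inspects
    anon=$(get_meminfo_value AnonHugePages)    # 0 kB on this run
    surp=$(get_meminfo_value HugePages_Surp)   # surplus pages beyond nr_hugepages
    resv=$(get_meminfo_value HugePages_Rsvd)   # reserved but not yet faulted in
    free=$(get_meminfo_value HugePages_Free)
    total=$(get_meminfo_value HugePages_Total)

    # a freshly configured, unused pool is expected to satisfy:
    (( surp == 0 && resv == 0 && free == total )) || echo "unexpected hugepage state" >&2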
00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.515 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.515 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8112564 kB' 'MemAvailable: 9492748 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452232 kB' 'Inactive: 1265696 kB' 'Active(anon): 131372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122528 kB' 'Mapped: 
48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132400 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71264 kB' 'KernelStack: 6160 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.516 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.517 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.549 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 
22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.550 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:08.551 nr_hugepages=1024 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.551 resv_hugepages=0 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.551 surplus_hugepages=0 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.551 anon_hugepages=0 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8113032 kB' 'MemAvailable: 9493220 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 451808 kB' 'Inactive: 1265700 kB' 'Active(anon): 130948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132396 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71260 kB' 'KernelStack: 6192 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.551 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 
22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.552 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.812 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8113032 kB' 'MemUsed: 4128952 kB' 'SwapCached: 0 kB' 'Active: 451808 kB' 'Inactive: 1265700 kB' 'Active(anon): 130948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1597024 kB' 'Mapped: 48732 kB' 'AnonPages: 122424 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61136 kB' 'Slab: 132396 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
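The block above is the tail of a get_meminfo call from setup/common.sh looking up HugePages_Surp: the trace shows each /proc/meminfo key being read with IFS=': ' and read -r var val _, skipped with continue while it does not match the requested key, and the matching value finally echoed back (echo 0 / return 0 above). A minimal sketch of that lookup pattern, using an illustrative function name rather than the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the lookup pattern visible in the trace above: walk /proc/meminfo
    # with IFS=': ', skip keys that do not match, print the requested field's value.
    # Illustrative only; the real get_meminfo in setup/common.sh also supports
    # per-node meminfo files.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Surp   # prints 0 on this build VM, matching the trace

The mem=("${mem[@]#Node +([0-9]) }") expansion seen elsewhere in this trace appears to be the per-node variant of the same idea: when a node is given, the helper reads /sys/devices/system/node/node<N>/meminfo and strips the leading "Node <N> " prefix before doing the same key match.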
00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.814 node0=1024 expecting 1024 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.814 00:04:08.814 real 0m1.030s 00:04:08.814 user 0m0.452s 00:04:08.814 sys 0m0.529s 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.814 22:30:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:08.814 ************************************ 00:04:08.814 END TEST default_setup 00:04:08.814 ************************************ 00:04:08.814 22:30:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.814 22:30:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:08.814 22:30:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.814 22:30:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.814 22:30:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.814 ************************************ 00:04:08.814 START TEST per_node_1G_alloc 00:04:08.814 ************************************ 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.814 22:30:26 
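Here default_setup finishes (node0=1024 expecting 1024, about 1.0 s of wall time) and per_node_1G_alloc starts by converting its 1 GiB request into a per-node hugepage count: get_test_nr_hugepages 1048576 0 yields nr_hugepages=512 and assigns all of it to node 0, which is where the NRHUGE=512 HUGENODE=0 environment in the lines that follow comes from. A worked sketch of that arithmetic, assuming the 2048 kB hugepage size reported later in the trace ('Hugepagesize: 2048 kB'); variable names are illustrative, the real helpers live in setup/hugepages.sh:

    # get_test_nr_hugepages 1048576 0  ->  512 pages, all of them on node 0
    size_kb=1048576                                # requested size: 1 GiB expressed in kB
    hugepagesize_kb=2048                           # default hugepage size on this VM
    user_nodes=(0)                                 # remaining argument: target NUMA node list

    nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 1048576 / 2048 = 512
    declare -A nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages            # node 0 gets all 512 pages
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[0]}"   # NRHUGE=512 HUGENODE=0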
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.814 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.075 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.075 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:09.075 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9163868 kB' 'MemAvailable: 10544056 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452340 kB' 'Inactive: 1265700 kB' 'Active(anon): 131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122556 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132436 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71300 kB' 'KernelStack: 6148 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.076 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9163616 kB' 'MemAvailable: 10543804 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452196 kB' 'Inactive: 1265700 kB' 'Active(anon): 131336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122496 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132440 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71304 kB' 'KernelStack: 6208 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.077 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.078 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9164212 kB' 'MemAvailable: 10544400 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452176 kB' 'Inactive: 1265700 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132440 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71304 kB' 'KernelStack: 6176 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.079 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.080 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.341 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 
22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.342 nr_hugepages=512 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:09.342 resv_hugepages=0 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.342 surplus_hugepages=0 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.342 anon_hugepages=0 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9164580 kB' 'MemAvailable: 10544768 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 451944 kB' 'Inactive: 1265700 kB' 'Active(anon): 131084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 
kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132436 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71300 kB' 'KernelStack: 6160 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.342 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 
22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.343 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9164840 kB' 'MemUsed: 3077144 kB' 'SwapCached: 0 kB' 'Active: 452172 kB' 'Inactive: 1265700 kB' 'Active(anon): 131312 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1597024 kB' 'Mapped: 48732 kB' 'AnonPages: 122540 kB' 'Shmem: 10464 kB' 'KernelStack: 6160 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61136 kB' 'Slab: 132420 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71284 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.344 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.346 node0=512 expecting 512 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:09.346 00:04:09.346 real 0m0.542s 00:04:09.346 user 0m0.264s 00:04:09.346 sys 0m0.316s 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.346 22:30:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.346 ************************************ 00:04:09.346 END TEST per_node_1G_alloc 00:04:09.346 ************************************ 00:04:09.346 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:09.346 22:30:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.346 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.346 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.346 22:30:27 
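The per_node_1G_alloc block above closes with the run_test wrapper printing its END banner and timing summary (real 0m0.542s, user 0m0.264s, sys 0m0.316s) and immediately launching the next case with run_test even_2G_alloc even_2G_alloc. A minimal sketch of that wrapper pattern, assuming a simplified stand-in rather than the actual autotest_common.sh implementation:

run_test() {
    # run_test <name> <function> [args...]: banner, time the body, banner again
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                     # the test body itself, e.g. even_2G_alloc
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

The real wrapper additionally suspends xtrace around the banners (the xtrace_disable / set +x calls visible in the trace), which is why the banner lines appear without a command prefix.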
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.346 ************************************ 00:04:09.346 START TEST even_2G_alloc 00:04:09.346 ************************************ 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:09.346 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:09.347 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:09.347 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.347 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.607 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.607 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc 
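even_2G_alloc begins by turning the requested size into a page count: get_test_nr_hugepages 2097152 treats its argument as kB, so 2097152 kB (2 GiB) over the default 2048 kB hugepage size yields nr_hugepages=1024, which is spread across the available NUMA nodes (a single node on this VM, so nodes_test[0]=1024) and handed to scripts/setup.sh via NRHUGE=1024 with HUGE_EVEN_ALLOC=yes. A rough sketch of that arithmetic, not the actual hugepages.sh code:

# Convert a requested size in kB into a hugepage count and split it evenly per node.
size_kb=2097152                                    # 2 GiB, the even_2G_alloc request
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepage_kb ))          # 2097152 / 2048 = 1024
nodes=( /sys/devices/system/node/node[0-9]* )
per_node=$(( nr_hugepages / ${#nodes[@]} ))        # 1024 on this single-node guest
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"    # what gets exported to scripts/setup.sh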
-- setup/hugepages.sh@92 -- # local surp 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8121124 kB' 'MemAvailable: 9501312 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452356 kB' 'Inactive: 1265700 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122628 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132336 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71200 kB' 'KernelStack: 6192 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- 
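What follows the setup.sh output is setup/common.sh's get_meminfo helper resolving AnonHugePages for verify_nr_hugepages (consulted only because /sys/kernel/mm/transparent_hugepage/enabled reads 'always [madvise] never', i.e. THP is not disabled). The helper slurps /proc/meminfo (or a node's meminfo file when a node is passed), strips any 'Node N ' prefix, then walks the fields with IFS=': ' until the requested key matches, which is why the trace shows one [[ ... ]] / continue pair per meminfo field before the final echo/return. A condensed stand-in with the same behaviour, assuming plain bash rather than the exact common.sh code:

get_meminfo() {
    # get_meminfo <field> [node]: print <field>'s value from /proc/meminfo, or from
    # /sys/devices/system/node/node<N>/meminfo when a node number is given.
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}                 # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0                                         # field absent: report 0, as the trace above does
}

get_meminfo AnonHugePages      # system-wide query, prints 0 on this guest
get_meminfo HugePages_Free 0   # per-node query against node0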
setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.607 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.608 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8121124 kB' 'MemAvailable: 9501312 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452132 kB' 'Inactive: 
1265700 kB' 'Active(anon): 131272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132332 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71196 kB' 'KernelStack: 6192 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.871 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.872 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8121052 kB' 'MemAvailable: 9501240 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452136 kB' 'Inactive: 1265700 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132332 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71196 kB' 'KernelStack: 6176 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.873 22:30:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ [xtrace elided: each field of the /proc/meminfo dump above is read and skipped with 'continue' until the requested key is reached] 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@33 -- # return 0 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.875 nr_hugepages=1024 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.875 resv_hugepages=0 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.875 surplus_hugepages=0 00:04:09.875 anon_hugepages=0 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8121496 kB' 'MemAvailable: 9501684 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452108 kB' 'Inactive: 1265700 kB' 'Active(anon): 131248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132332 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71196 kB' 'KernelStack: 6176 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.875 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.875 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [xtrace elided: the remaining fields of the dump above are read and skipped with 'continue'; the scan resumes below at FileHugePages and stops when HugePages_Total matches]
00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.877 22:30:27 
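(The lookups traced above all follow one pattern: setup/common.sh's get_meminfo dumps /proc/meminfo -- or /sys/devices/system/node/nodeN/meminfo when a node argument is given, as in the HugePages_Surp 0 call that starts just below -- into an array, then walks it field by field until the requested key matches and echoes its value. A minimal stand-alone sketch of that pattern follows; the function name and internals here are an illustration reconstructed from the trace, not the actual setup/common.sh source.)

get_meminfo_sketch() {
    # Illustrative only: inferred from the xtrace in this log, not SPDK's real helper.
    local get=$1 node=$2 var val rest
    local mem_f=/proc/meminfo
    # With a node argument, read the per-NUMA-node file instead (as done for HugePages_Surp 0).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; strip it, then split on ': '.
    while IFS=': ' read -r var val rest; do
        [[ $var == "$get" ]] || continue   # skip every other field, as the trace shows
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1024 at this point in the run
#      get_meminfo_sketch HugePages_Surp 0   -> 0 for NUMA node 0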
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8121496 kB' 'MemUsed: 4120488 kB' 'SwapCached: 0 kB' 'Active: 452100 kB' 'Inactive: 1265700 kB' 'Active(anon): 131240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1597024 kB' 'Mapped: 48732 kB' 'AnonPages: 122376 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61136 kB' 'Slab: 132332 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.877 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.877 22:30:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [xtrace elided: the remaining fields of the node0 dump above are read and skipped; the scan resumes below at ShmemHugePages and stops when HugePages_Surp matches] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 --
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.878 node0=1024 expecting 1024 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.878 00:04:09.878 real 0m0.542s 00:04:09.878 user 0m0.258s 00:04:09.878 sys 0m0.315s 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.878 22:30:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.878 ************************************ 00:04:09.878 END TEST even_2G_alloc 00:04:09.878 ************************************ 00:04:09.878 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:09.878 22:30:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:09.878 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.878 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.878 22:30:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.878 ************************************ 00:04:09.878 START TEST odd_alloc 00:04:09.878 ************************************ 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
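(For the odd_alloc test that starts here, the trace shows get_test_nr_hugepages being handed 2098176 kB -- HUGEMEM=2049 MB -- and settling on nr_hugepages=1025 before setup.sh is re-run. One way to arrive at that number, assuming ceiling division by the 2048 kB hugepage size reported in the meminfo dumps above -- the real hugepages.sh arithmetic may differ -- is the sketch below.)

hugemem_mb=2049                                        # HUGEMEM exported by the test
size_kb=$((hugemem_mb * 1024))                         # 2098176, the value passed to get_test_nr_hugepages
hugepage_kb=2048                                       # Hugepagesize from /proc/meminfo above
nr_hugepages=$(((size_kb + hugepage_kb - 1) / hugepage_kb))
echo "nr_hugepages=$nr_hugepages"                      # prints nr_hugepages=1025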
00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.878 22:30:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.401 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.401 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8117108 kB' 'MemAvailable: 9497296 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452556 kB' 'Inactive: 1265700 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132308 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71172 kB' 'KernelStack: 6192 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 
22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.401 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 
22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241984 kB' 'MemFree: 8117108 kB' 'MemAvailable: 9497296 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452148 kB' 'Inactive: 1265700 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122480 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132316 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6192 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.402 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
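The long runs of '[[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' are setup/common.sh's get_meminfo walking a /proc/meminfo snapshot one field at a time until it reaches the requested key, then echoing that field's value. A condensed paraphrase of the pattern, reconstructed from the trace rather than copied from the script:

    # Condensed paraphrase of the get_meminfo loop seen in the trace (not the verbatim source).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node lookups read that node's own meminfo file when one exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix found in per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # every non-matching field produces one 'continue' entry in the trace
            echo "${val:-0}"
            return 0
        done
        echo 0
    }
    get_meminfo HugePages_Surp   # prints 0 in this run, hence surp=0 below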
00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 
22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.403 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8117108 kB' 'MemAvailable: 9497296 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452180 kB' 'Inactive: 1265700 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122480 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132316 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6192 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
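Each of these get_meminfo calls starts from the same /proc/meminfo snapshot printed above, and that snapshot already reflects the requested allocation: HugePages_Total and HugePages_Free are both 1025 and Hugetlb is 2099200 kB. A quick consistency check of those figures:

    # Sanity-check the hugepage totals shown in the meminfo snapshots above.
    hugepages_total=1025
    hugepagesize_kb=2048
    echo $(( hugepages_total * hugepagesize_kb ))   # 2099200 kB, matching the 'Hugetlb:' field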
00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.404 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.405 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:10.406 nr_hugepages=1025 00:04:10.406 resv_hugepages=0 00:04:10.406 surplus_hugepages=0 00:04:10.406 anon_hugepages=0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8117108 kB' 'MemAvailable: 9497296 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 452096 kB' 'Inactive: 1265700 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122372 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132312 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71176 kB' 'KernelStack: 6176 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.406 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.406 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.407 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8117108 kB' 'MemUsed: 4124876 kB' 'SwapCached: 0 kB' 'Active: 452152 kB' 'Inactive: 1265700 kB' 'Active(anon): 131292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1597024 kB' 'Mapped: 48740 kB' 'AnonPages: 122472 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61136 kB' 'Slab: 132312 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.408 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
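The repeated `IFS=': '` / `read -r var val _` / `continue` entries above are one pass of the meminfo lookup helper traced out of setup/common.sh: it loads /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node is given), strips any "Node <N> " prefix, then walks the "Key: value" pairs, skipping every field until the requested one matches and echoing its value. A minimal bash sketch of that pattern, reconstructed from the trace — names mirror the log, but this is an approximation, not the verbatim SPDK helper:

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in the setup/common.sh trace.
# Reconstructed from the log; not the verbatim SPDK helper.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # With a NUMA node argument, read the per-node meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix on per-node files

    local IFS=': '
    while read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip Writeback, AnonPages, ... until it matches
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total      # e.g. 1025 in the run above
get_meminfo HugePages_Surp 0     # surplus hugepages on NUMA node 0
```

In the odd_alloc run above, this is what yields `nr_hugepages=1025`, `resv=0`, `surplus=0`, and ultimately `node0=1025 expecting 1025`.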
00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:10.409 node0=1025 expecting 1025 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:10.409 00:04:10.409 real 0m0.604s 00:04:10.409 user 0m0.305s 00:04:10.409 sys 0m0.303s 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.409 22:30:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.409 ************************************ 00:04:10.409 END TEST odd_alloc 00:04:10.409 ************************************ 00:04:10.668 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:10.668 22:30:28 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.668 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.668 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.668 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.668 ************************************ 00:04:10.668 START TEST custom_alloc 00:04:10.668 ************************************ 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.668 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.929 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.929 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
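The custom_alloc trace here converts the requested 1048576 kB pool into a hugepage count and a per-node HUGENODE spec before invoking scripts/setup.sh: with the 2048 kB page size reported earlier, 1048576 / 2048 = 512 pages, all placed on node 0 as `nodes_hp[0]=512`. A rough sketch of that translation follows, under the assumption (suggested by the `local IFS=,` and `HUGENODE+=` entries in the trace) that the assignments are comma-joined for setup.sh; constants mirror the log, and the helper is illustrative rather than the SPDK script itself:

```bash
#!/usr/bin/env bash
# Illustrative size -> hugepage-count -> HUGENODE translation (not the SPDK script).
size_kb=1048576                                           # requested pool, in kB
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepage_kb ))                 # 1048576 / 2048 = 512

# Single NUMA node in this VM, so the whole pool lands on node 0.
declare -a nodes_hp
nodes_hp[0]=$nr_hugepages

# Collect per-node assignments and join them the way the trace shows.
parts=()
for node in "${!nodes_hp[@]}"; do
    parts+=("nodes_hp[$node]=${nodes_hp[node]}")
done
HUGENODE=$(IFS=,; echo "${parts[*]}")                     # -> nodes_hp[0]=512
echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"
```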
# verify_nr_hugepages 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.929 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9171480 kB' 'MemAvailable: 10551672 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 452364 kB' 'Inactive: 1265704 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122660 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132316 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6164 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.930 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9171480 kB' 'MemAvailable: 10551672 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 452136 kB' 'Inactive: 1265704 kB' 'Active(anon): 
131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132316 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6132 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.931 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
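Because node= is empty in this run, the earlier [[ -e /sys/devices/system/node/node/meminfo ]] test literally probes a path with no node id, fails, and the helper falls back to /proc/meminfo. When a node id is supplied, the per-node file's entries are prefixed with "Node N ", which the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A minimal stand-alone illustration of that strip, using made-up values:

shopt -s extglob                     # required for the +([0-9]) pattern
mem=('Node 0 HugePages_Total: 512' 'Node 0 HugePages_Free: 512')
mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node 0 " prefix from each entry
printf '%s\n' "${mem[@]}"
# -> HugePages_Total: 512
# -> HugePages_Free: 512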
00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.932 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.933 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9172172 kB' 'MemAvailable: 10552364 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 452056 kB' 'Inactive: 1265704 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122568 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132316 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6176 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.196 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.197 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
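For reference only (this is not what the test script does), the hugepage counters that these scans extract one get_meminfo call at a time could also be read in a single pass with awk:

awk -F': +' '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1 "=" $2}' /proc/meminfo
# HugePages_Total=512
# HugePages_Free=512
# HugePages_Rsvd=0
# HugePages_Surp=0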
00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.198 nr_hugepages=512 00:04:11.198 resv_hugepages=0 00:04:11.198 surplus_hugepages=0 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # 
echo nr_hugepages=512 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.198 anon_hugepages=0 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9171920 kB' 'MemAvailable: 10552112 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 452020 kB' 'Inactive: 1265704 kB' 'Active(anon): 131160 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122308 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61136 kB' 'Slab: 132304 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71168 kB' 'KernelStack: 6192 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 351136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
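The long run of "continue" entries above and below is the get_meminfo helper from test/setup/common.sh scanning every /proc/meminfo key until it reaches the one it was asked for (here HugePages_Rsvd, then HugePages_Total). A minimal sketch of that pattern follows; it is illustrative only and not a copy of the real helper, which buffers the file with mapfile and strips the per-node prefix with an extglob pattern, as the trace shows.

    get_meminfo() {
        # Sketch of the scan traced in this log, reusing the names the xtrace
        # shows (get, node, mem_f, var, val); details of the real helper differ.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local line var val _
        # per-node statistics come from sysfs when a node id is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS='' read -r line; do
            # per-node files prefix every key with "Node <N> "; drop it
            [[ -n $node ]] && line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            # print the value of the requested key, e.g. 512 for HugePages_Total
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
            # every non-matching key is skipped -- the repeated "continue"
            # entries that dominate this part of the log
        done < "$mem_f"
        return 1
    }

In the trace, get_meminfo HugePages_Total reads the global pool from /proc/meminfo, while get_meminfo HugePages_Surp 0 switches to /sys/devices/system/node/node0/meminfo, which is exactly the node= handling visible around these entries.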
00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.198 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.199 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.200 
22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9172364 kB' 'MemUsed: 3069620 kB' 'SwapCached: 0 kB' 'Active: 452272 kB' 'Inactive: 1265704 kB' 'Active(anon): 131412 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1597028 kB' 'Mapped: 48740 kB' 'AnonPages: 122552 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61136 kB' 'Slab: 132304 kB' 'SReclaimable: 61136 kB' 'SUnreclaim: 71168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.200 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.201 node0=512 expecting 512 00:04:11.201 ************************************ 00:04:11.201 END TEST custom_alloc 00:04:11.201 ************************************ 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.201 00:04:11.201 real 0m0.600s 00:04:11.201 user 0m0.261s 00:04:11.201 sys 0m0.344s 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.201 22:30:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- 
# set +x 00:04:11.201 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.201 22:30:28 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:11.201 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.201 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.201 22:30:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.201 ************************************ 00:04:11.201 START TEST no_shrink_alloc 00:04:11.201 ************************************ 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.201 22:30:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.550 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.550 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:11.550 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8118880 kB' 'MemAvailable: 9499052 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 448484 kB' 'Inactive: 1265704 kB' 'Active(anon): 127624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118720 kB' 'Mapped: 48148 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132140 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 71044 kB' 'KernelStack: 6052 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.550 
22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.550 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.814 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
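At this point the suite has moved on to no_shrink_alloc, which repeats the same bookkeeping for 1024 pages pinned to node 0: verify_nr_hugepages first confirms transparent hugepages are not forced off (the "always [madvise] never" check above), then compares the kernel's counters against the requested size, the same way the custom_alloc run ended with "node0=512 expecting 512". A rough, self-contained sketch of that accounting, using plain awk instead of the traced helper, is below; the real verify_nr_hugepages in test/setup/hugepages.sh also walks every NUMA node and tracks anonymous hugepages.

    verify_hugepage_accounting() {
        # Sketch of the check the trace performs as
        # "(( 512 == nr_hugepages + surp + resv ))"; illustrative only.
        local expected=$1        # 512 for custom_alloc, 1024 for no_shrink_alloc
        local total free resv surp
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
        resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
        surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
        # the pool must hold exactly the requested pages plus any surplus
        # and reserved pages
        (( total == expected + surp + resv )) || return 1
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp free=$free"
    }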
00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.815 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
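The trace above is the setup/common.sh get_meminfo helper scanning /proc/meminfo one 'key: value' pair at a time until it reaches the requested key (here AnonHugePages, which reports 0), after which setup/hugepages.sh stores the value and asks for the next counter. A minimal sketch of that helper, reconstructed only from the trace lines in this log (the real setup/common.sh in the SPDK tree may differ; the per-node branch and the trailing return 1 are assumptions):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used when stripping "Node <n> " prefixes

get_meminfo() {
	local get=$1 node=${2:-}   # key to look up, optional NUMA node (empty in this run)
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Assumption: when a node is given and a per-node meminfo exists, read that file instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"   # e.g. "0" for AnonHugePages, "1024" for HugePages_Total
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Typical use, mirroring the hugepages.sh steps traced in this log:
anon=$(get_meminfo AnonHugePages)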
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8119140 kB' 'MemAvailable: 9499312 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 448128 kB' 'Inactive: 1265704 kB' 'Active(anon): 127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118376 kB' 'Mapped: 47996 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132140 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 71044 kB' 'KernelStack: 6096 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 335792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54436 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.816 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
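The /proc/meminfo snapshots captured so far both show HugePages_Total and HugePages_Free at 1024 with HugePages_Rsvd and HugePages_Surp at 0, which is what the no_shrink_alloc case expects. Outside the harness the same counters can be pulled with a one-liner; this is a generic sketch, not a command taken from the SPDK scripts:

# Generic sketch: print the hugepage counters the no_shrink_alloc test keeps re-reading above.
awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp)/ { print $1, $2 }' /proc/meminfo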
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124652 kB' 'MemAvailable: 9504824 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 447856 kB' 'Inactive: 1265704 kB' 'Active(anon): 126996 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118176 kB' 'Mapped: 47996 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132120 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6096 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54436 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.818 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
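At setup/hugepages.sh lines 97-110 the test has collected anon=0, surp=0 and resv=0, printed them as the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages lines above, and is about to read HugePages_Total one more time. A sketch of that verification step, reconstructed from the trace alone (the function name is hypothetical and the real setup/hugepages.sh may structure this differently):

verify_no_shrink_alloc() {   # hypothetical name for the hugepages.sh@97-110 sequence
	local nr_hugepages=1024              # page count requested earlier in the test run
	local anon surp resv

	anon=$(get_meminfo AnonHugePages)    # THP pages backing anonymous memory
	surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the static pool
	resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted-in pages

	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"
	echo "anon_hugepages=$anon"

	# Allocating from the pool must not have shrunk it: the total still has to
	# equal the requested count, with nothing surplus or reserved on top.
	(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
	(( $(get_meminfo HugePages_Total) == nr_hugepages ))
}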
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124652 kB' 'MemAvailable: 9504824 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 447900 kB' 'Inactive: 1265704 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118212 kB' 'Mapped: 47996 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132120 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6080 kB' 'PageTables: 3684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54420 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:04:11.820 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:11.821 22:30:29
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
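The dozens of comparisons surrounding this point are that same scan discarding every /proc/meminfo key it was not asked for; only the final HugePages_Total entry echoes a value (1024) back to the caller. Outside the test suite, a hypothetical one-liner lands on the same number directly:

awk -F':[[:space:]]+' '$1 == "HugePages_Total" {print $2}' /proc/meminfo    # prints 1024 here

The escaped right-hand sides in the trace (e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are simply how bash xtrace renders the quoted pattern of each [[ ... ]] test; they are not corruption in the log.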
00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.821 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124652 kB' 'MemUsed: 4117332 kB' 'SwapCached: 0 kB' 'Active: 447764 kB' 'Inactive: 1265704 kB' 'Active(anon): 126904 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1597028 kB' 'Mapped: 47996 kB' 'AnonPages: 118068 kB' 'Shmem: 10464 kB' 'KernelStack: 6096 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61096 kB' 'Slab: 132120 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 71024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 
22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.822 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.823 node0=1024 expecting 1024 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.823 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.081 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.081 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.348 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:12.348 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.348 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124332 kB' 'MemAvailable: 9504504 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 448752 kB' 'Inactive: 1265704 kB' 'Active(anon): 127892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118652 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132096 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 71000 kB' 'KernelStack: 6244 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
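Here verify_nr_hugepages starts over for the NRHUGE=512 case: before counting anonymous huge pages it checks the transparent-hugepage knob, which on this host reads `always [madvise] never`, i.e. THP is not pinned to [never], so AnonHugePages is then fetched from the dump (0 kB on this run). A hedged sketch of that gate using the standard sysfs path; the fallback branch and variable names are assumptions, not the suite's code:

# Sketch of the THP gate traced at setup/hugepages.sh@96-97.
anon=0                                                         # assumed fallback when THP is off
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)         # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=$anon"                                    # 0 on this run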
00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
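The `node0=1024 expecting 1024` lines earlier in the trace come from the per-node pass: get_nodes enumerates /sys/devices/system/node/node*, records the 1024 expected pages for the single node found, and the verifier then reads that node's own meminfo to confirm the expectation. A compact, hypothetical rendering of that loop (helper names and messages are illustrative, not the suite's):

# Per-node hugepage check mirroring the get_nodes / node0=1024 entries above.
expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Node meminfo lines look like "Node 0 HugePages_Total: 1024".
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    [[ $total -eq $expected ]] || echo "node${node} mismatch" >&2
done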
00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.349 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124448 kB' 'MemAvailable: 9504620 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 448128 kB' 'Inactive: 1265704 kB' 'Active(anon): 127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118204 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132092 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 70996 kB' 'KernelStack: 6128 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
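Annotation: the trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time. It snapshots the file with mapfile, strips any leading "Node N " prefix so per-node files parse the same way, then splits each line with IFS=': ' and keeps reading until the field name matches the requested key; xtrace prints the right-hand side of the [[ $var == $get ]] test with every character backslash-escaped, which is why the key shows up as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A minimal self-contained sketch of that approach, reconstructed from the trace rather than copied from the SPDK source (the final echo 0 fallback in particular is an assumption):

#!/usr/bin/env bash
# Sketch of a get_meminfo-style helper, reconstructed from the xtrace above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-NUMA-node counters live under /sys when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    echo 0   # fallback when the key is absent (assumed behavior)
}

get_meminfo HugePages_Surp   # prints 0 for the snapshot traced above
get_meminfo MemFree 0        # would read node0's per-node meminfo instead

With an empty node argument the /sys path collapses to /sys/devices/system/node/node/meminfo, which does not exist, so the helper falls back to the global /proc/meminfo; that is exactly what the [[ -e ... ]] test in the trace shows.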
00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.350 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 
22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.351 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124876 kB' 'MemAvailable: 9505044 kB' 'Buffers: 2436 kB' 'Cached: 1594588 kB' 'SwapCached: 0 kB' 'Active: 447868 kB' 'Inactive: 1265700 kB' 'Active(anon): 127008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118096 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132092 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 70996 kB' 'KernelStack: 6128 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54436 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.352 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.353 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.353 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.353 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.353 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.354 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.355 nr_hugepages=1024 00:04:12.355 resv_hugepages=0 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.355 surplus_hugepages=0 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.355 anon_hugepages=0 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124876 kB' 'MemAvailable: 9505048 kB' 'Buffers: 2436 kB' 'Cached: 1594592 kB' 'SwapCached: 0 kB' 'Active: 447732 kB' 'Inactive: 1265704 kB' 'Active(anon): 126872 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 117920 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 61096 kB' 'Slab: 132092 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 70996 kB' 'KernelStack: 6128 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 336288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54436 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
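Annotation: a few entries back, hugepages.sh finished the same walk for HugePages_Surp and HugePages_Rsvd (both 0), echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and asserted that the hugepage pool is fully accounted for before the no_shrink_alloc check continues. A rough self-contained sketch of that bookkeeping, with the helper and variable names assumed rather than taken from the SPDK source:

#!/usr/bin/env bash
# Sketch of the accounting implied by the trace (all names are assumptions).
meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=1024                  # pool size configured for this test run
surp=$(meminfo HugePages_Surp)     # 0 in the snapshots above
resv=$(meminfo HugePages_Rsvd)     # 0
anon=$(meminfo AnonHugePages)      # 0 kB

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv" \
     "surplus_hugepages=$surp anon_hugepages=$anon"

# Nothing may be missing from the pool: 1024 == 1024 + 0 + 0 in this run.
(( $(meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
    || echo "hugepage accounting mismatch"

In the run traced here both arithmetic checks pass, and the script then re-reads the meminfo snapshot (HugePages_Total and HugePages_Free are both 1024) before moving on.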
00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 8124876 kB' 'MemUsed: 4117108 kB' 'SwapCached: 0 kB' 'Active: 447920 kB' 'Inactive: 1265704 kB' 'Active(anon): 127060 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 
kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1597028 kB' 'Mapped: 47968 kB' 'AnonPages: 118068 kB' 'Shmem: 10464 kB' 'KernelStack: 6112 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61096 kB' 'Slab: 132092 kB' 'SReclaimable: 61096 kB' 'SUnreclaim: 70996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 
22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.358 node0=1024 expecting 1024 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.358 ************************************ 00:04:12.358 END TEST no_shrink_alloc 00:04:12.358 ************************************ 00:04:12.358 00:04:12.358 real 0m1.152s 00:04:12.358 user 0m0.558s 00:04:12.358 sys 0m0.628s 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.358 22:30:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.358 22:30:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:12.358 
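(Aside, not part of the captured output: the long runs of "continue" traces above come from setup/common.sh's get_meminfo helper, which scans /proc/meminfo, or the per-node copy under /sys/devices/system/node/nodeN/meminfo, field by field until it reaches the requested key. A minimal sketch of that helper, reconstructed from the traces; the shopt line is an assumption needed for the extglob pattern the script uses.)

shopt -s extglob   # assumed: required for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=$2              # e.g. get_meminfo HugePages_Surp 0
    local var val line
    local mem_f=/proc/meminfo mem
    # Prefer the per-node file when a node is given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue    # the repeated "continue" lines above
        echo "$val"                         # e.g. 1024 for HugePages_Total
        return 0
    done
    return 1
}
# The test then adds the per-node surplus/reserved counts and compares them
# against the global pool, hence the "node0=1024 expecting 1024" line above.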
22:30:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:12.358 22:30:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:12.358 ************************************ 00:04:12.358 END TEST hugepages 00:04:12.358 ************************************ 00:04:12.358 00:04:12.358 real 0m4.948s 00:04:12.358 user 0m2.261s 00:04:12.358 sys 0m2.724s 00:04:12.358 22:30:30 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.358 22:30:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.617 22:30:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:12.617 22:30:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:12.617 22:30:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.617 22:30:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.617 22:30:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.617 ************************************ 00:04:12.617 START TEST driver 00:04:12.617 ************************************ 00:04:12.617 22:30:30 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:12.617 * Looking for test storage... 00:04:12.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:12.617 22:30:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:12.617 22:30:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.617 22:30:30 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.184 22:30:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.184 22:30:30 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.184 22:30:30 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.184 22:30:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.184 ************************************ 00:04:13.184 START TEST guess_driver 00:04:13.184 ************************************ 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
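(Aside: before the driver tests start, clear_hp, traced above, resets every hugepage pool on every NUMA node. A minimal sketch that walks sysfs directly rather than the nodes_sys map the script keeps; paths are the ones visible in the trace.)

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -w "$hp/nr_hugepages" ]] || continue
            echo 0 > "$hp/nr_hugepages"     # the "echo 0" lines in the trace
        done
    done
    export CLEAR_HUGE=yes
}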
00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:13.184 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:13.184 Looking for driver=uio_pci_generic 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.184 22:30:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.120 22:30:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.685 00:04:14.685 real 0m1.533s 00:04:14.685 user 0m0.562s 00:04:14.685 sys 0m0.981s 00:04:14.685 22:30:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:14.685 22:30:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.685 ************************************ 00:04:14.685 END TEST guess_driver 00:04:14.685 ************************************ 00:04:14.685 22:30:32 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:14.685 00:04:14.685 real 0m2.290s 00:04:14.685 user 0m0.823s 00:04:14.685 sys 0m1.533s 00:04:14.685 22:30:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.685 22:30:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.685 ************************************ 00:04:14.685 END TEST driver 00:04:14.685 ************************************ 00:04:14.943 22:30:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.943 22:30:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:14.943 22:30:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.943 22:30:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.943 22:30:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.943 ************************************ 00:04:14.943 START TEST devices 00:04:14.943 ************************************ 00:04:14.943 22:30:32 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:14.943 * Looking for test storage... 00:04:14.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.943 22:30:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:14.943 22:30:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:14.943 22:30:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.943 22:30:32 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.880 22:30:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
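(Aside: the guess_driver test above picks the userspace I/O driver the way setup.sh does: a VFIO driver when IOMMU groups exist or unsafe no-IOMMU mode is enabled, otherwise uio_pci_generic if modprobe can resolve the module. A minimal sketch of that decision; the nullglob line is an assumption so an empty /sys/kernel/iommu_groups counts as zero groups, and the vfio-pci name is assumed since this run only exercised the uio branch.)

shopt -s nullglob
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                     # assumed name; not taken in this run
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        # In this run both insmod lines end in .ko.xz, so uio_pci_generic wins.
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}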
00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.881 22:30:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.881 No valid GPT data, bailing 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:15.881 
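(Aside: the devices test above first drops zoned namespaces, then checks each remaining block device for an existing partition table and a minimum size before using it. A minimal sketch of those checks; usable_disk folds the script's separate block_in_use/spdk-gpt.py step and the size check into one function for brevity.)

min_disk_size=3221225472    # 3 GiB, as in the trace
is_block_zoned() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned ]] || return 1
    [[ $(<"/sys/block/$dev/queue/zoned") != none ]]
}
usable_disk() {
    local dev=$1
    is_block_zoned "$dev" && return 1
    # Empty PTTYPE is the "No valid GPT data, bailing" case above: disk is free.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && return 1
    local bytes=$(( $(<"/sys/block/$dev/size") * 512 ))   # size file counts 512-byte sectors
    (( bytes >= min_disk_size ))
}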
22:30:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:15.881 No valid GPT data, bailing 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:15.881 No valid GPT data, bailing 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:15.881 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:15.881 22:30:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:15.881 22:30:33 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:16.140 No valid GPT data, bailing 00:04:16.140 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.140 22:30:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:16.140 22:30:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:16.140 22:30:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:16.140 22:30:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:16.140 22:30:33 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:16.140 22:30:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:16.140 22:30:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.140 22:30:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.140 22:30:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:16.140 ************************************ 00:04:16.140 START TEST nvme_mount 00:04:16.140 ************************************ 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.140 22:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:17.076 Creating new GPT entries in memory. 00:04:17.076 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.076 other utilities. 00:04:17.076 22:30:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.076 22:30:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.076 22:30:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.076 22:30:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.076 22:30:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:18.013 Creating new GPT entries in memory. 00:04:18.013 The operation has completed successfully. 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56944 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.013 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- 
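(Aside: the nvme_mount setup traced above wipes the test disk, creates a single partition, formats it and mounts it at the test path; the script additionally serializes sgdisk with flock and waits for udev events via sync_dev_uevents.sh. A minimal sketch using the same arithmetic and paths as the trace; the touch line stands in for the dummy test file the verify step looks for.)

disk=nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
size=$(( 1073741824 / 4096 ))                              # same "size /= 4096" step as the trace
sgdisk "/dev/$disk" --zap-all
sgdisk "/dev/$disk" --new=1:2048:$(( 2048 + size - 1 ))    # --new=1:2048:264191 above
mkdir -p "$mnt"
mkfs.ext4 -qF "/dev/${disk}p1"
mount "/dev/${disk}p1" "$mnt"
touch "$mnt/test_nvme"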
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.272 22:30:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.272 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.272 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:18.272 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:18.272 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.272 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.272 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.530 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.530 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.530 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.530 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.788 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.788 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.047 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:19.047 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:19.047 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:19.047 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.047 22:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.305 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.305 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:19.305 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.305 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.305 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.305 22:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.305 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.305 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.563 22:30:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.563 22:30:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.819 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.819 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:19.819 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.819 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.819 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.819 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.077 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.077 00:04:20.077 real 0m4.153s 00:04:20.077 user 0m0.736s 00:04:20.077 sys 0m1.129s 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.077 22:30:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:20.077 ************************************ 00:04:20.077 END TEST nvme_mount 00:04:20.077 ************************************ 00:04:20.336 22:30:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:20.336 22:30:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:20.336 22:30:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.336 22:30:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.336 22:30:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:20.336 ************************************ 00:04:20.336 START TEST dm_mount 00:04:20.336 ************************************ 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:20.336 22:30:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:21.271 Creating new GPT entries in memory. 00:04:21.271 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:21.271 other utilities. 00:04:21.271 22:30:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:21.271 22:30:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.271 22:30:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.271 22:30:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.271 22:30:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:22.206 Creating new GPT entries in memory. 00:04:22.206 The operation has completed successfully. 00:04:22.206 22:30:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.206 22:30:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.206 22:30:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.206 22:30:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.206 22:30:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:23.581 The operation has completed successfully. 
00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57378 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.582 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.841 22:30:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.099 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.099 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:24.099 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:24.099 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.099 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.100 22:30:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:24.358 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.358 22:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:24.617 00:04:24.617 real 0m4.238s 00:04:24.617 user 0m0.461s 00:04:24.617 sys 0m0.736s 00:04:24.617 ************************************ 00:04:24.617 END TEST dm_mount 00:04:24.617 22:30:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.617 22:30:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.617 ************************************ 00:04:24.617 22:30:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:24.617 22:30:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:24.617 22:30:42 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:24.617 22:30:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.617 22:30:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.617 22:30:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.618 22:30:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.618 22:30:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.876 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.876 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.876 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.876 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.876 22:30:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:24.876 22:30:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.876 22:30:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.877 22:30:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.877 22:30:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.877 22:30:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.877 22:30:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:24.877 00:04:24.877 real 0m10.033s 00:04:24.877 user 0m1.895s 00:04:24.877 sys 0m2.501s 00:04:24.877 ************************************ 00:04:24.877 END TEST devices 00:04:24.877 ************************************ 00:04:24.877 22:30:42 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.877 22:30:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.877 22:30:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:24.877 ************************************ 00:04:24.877 END TEST setup.sh 00:04:24.877 ************************************ 00:04:24.877 00:04:24.877 real 0m22.643s 00:04:24.877 user 0m7.231s 00:04:24.877 sys 0m9.795s 00:04:24.877 22:30:42 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.877 22:30:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.877 22:30:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.877 22:30:42 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.811 Hugepages 00:04:25.811 node hugesize free / total 00:04:25.811 node0 1048576kB 0 / 0 00:04:25.811 node0 2048kB 2048 / 2048 00:04:25.811 00:04:25.811 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.811 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.811 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:25.811 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:25.811 22:30:43 -- spdk/autotest.sh@130 -- # uname -s 00:04:25.811 22:30:43 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:25.811 22:30:43 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:25.811 22:30:43 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.377 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.636 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.636 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.636 22:30:44 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:27.670 22:30:45 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:27.670 22:30:45 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:27.670 22:30:45 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.670 22:30:45 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:27.670 22:30:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:27.670 22:30:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:27.670 22:30:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.670 22:30:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.670 22:30:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:27.670 22:30:45 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:27.670 22:30:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:27.670 22:30:45 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.236 Waiting for block devices as requested 00:04:28.236 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.236 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.236 22:30:46 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:28.236 22:30:46 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:28.236 22:30:46 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.236 22:30:46 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:28.236 22:30:46 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.236 22:30:46 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:28.236 22:30:46 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.236 22:30:46 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:28.236 22:30:46 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:28.236 22:30:46 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:28.236 22:30:46 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:28.236 22:30:46 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:28.236 22:30:46 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:28.495 22:30:46 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:28.495 22:30:46 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:28.495 22:30:46 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:28.495 22:30:46 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:28.495 22:30:46 -- common/autotest_common.sh@1557 -- # continue 00:04:28.495 
22:30:46 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:28.495 22:30:46 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.495 22:30:46 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:28.495 22:30:46 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:28.495 22:30:46 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:28.495 22:30:46 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:28.495 22:30:46 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:28.495 22:30:46 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:28.495 22:30:46 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:28.495 22:30:46 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:28.495 22:30:46 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:28.495 22:30:46 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:28.495 22:30:46 -- common/autotest_common.sh@1557 -- # continue 00:04:28.495 22:30:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:28.495 22:30:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.495 22:30:46 -- common/autotest_common.sh@10 -- # set +x 00:04:28.495 22:30:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:28.495 22:30:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.495 22:30:46 -- common/autotest_common.sh@10 -- # set +x 00:04:28.495 22:30:46 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.321 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.321 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.321 22:30:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:29.321 22:30:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.321 22:30:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.321 22:30:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:29.321 22:30:47 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:29.322 22:30:47 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.322 22:30:47 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:29.322 22:30:47 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:29.322 22:30:47 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:29.322 22:30:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:29.322 22:30:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:29.322 22:30:47 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.322 22:30:47 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.322 22:30:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:29.322 22:30:47 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:29.322 22:30:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:29.322 22:30:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:29.322 22:30:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:29.322 22:30:47 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:29.322 22:30:47 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.322 22:30:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:29.322 22:30:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:29.322 22:30:47 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:29.322 22:30:47 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.322 22:30:47 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:29.322 22:30:47 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:29.322 22:30:47 -- common/autotest_common.sh@1593 -- # return 0 00:04:29.322 22:30:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:29.581 22:30:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:29.581 22:30:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:29.581 22:30:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:29.581 22:30:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:29.581 22:30:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.581 22:30:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.581 22:30:47 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:29.581 22:30:47 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:29.581 22:30:47 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:29.581 22:30:47 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.581 22:30:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.581 22:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.581 22:30:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.581 ************************************ 00:04:29.581 START TEST env 00:04:29.581 ************************************ 00:04:29.581 22:30:47 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.581 * Looking for test storage... 
00:04:29.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:29.581 22:30:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.581 22:30:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.581 22:30:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.581 22:30:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.581 ************************************ 00:04:29.581 START TEST env_memory 00:04:29.581 ************************************ 00:04:29.581 22:30:47 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.581 00:04:29.581 00:04:29.581 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.581 http://cunit.sourceforge.net/ 00:04:29.581 00:04:29.581 00:04:29.581 Suite: memory 00:04:29.581 Test: alloc and free memory map ...[2024-07-15 22:30:47.319099] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.581 passed 00:04:29.581 Test: mem map translation ...[2024-07-15 22:30:47.353004] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.581 [2024-07-15 22:30:47.353057] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.581 [2024-07-15 22:30:47.353125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:29.581 [2024-07-15 22:30:47.353136] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.581 passed 00:04:29.840 Test: mem map registration ...[2024-07-15 22:30:47.419514] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:29.840 [2024-07-15 22:30:47.419576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:29.840 passed 00:04:29.840 Test: mem map adjacent registrations ...passed 00:04:29.840 00:04:29.840 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.840 suites 1 1 n/a 0 0 00:04:29.840 tests 4 4 4 0 0 00:04:29.840 asserts 152 152 152 0 n/a 00:04:29.840 00:04:29.840 Elapsed time = 0.222 seconds 00:04:29.840 00:04:29.840 real 0m0.240s 00:04:29.840 user 0m0.223s 00:04:29.840 sys 0m0.014s 00:04:29.840 22:30:47 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.840 22:30:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:29.840 ************************************ 00:04:29.840 END TEST env_memory 00:04:29.840 ************************************ 00:04:29.840 22:30:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:29.840 22:30:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:29.840 22:30:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.840 22:30:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.840 22:30:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.840 ************************************ 00:04:29.840 START TEST env_vtophys 
00:04:29.840 ************************************ 00:04:29.840 22:30:47 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:29.840 EAL: lib.eal log level changed from notice to debug 00:04:29.840 EAL: Detected lcore 0 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 1 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 2 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 3 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 4 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 5 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 6 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 7 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 8 as core 0 on socket 0 00:04:29.840 EAL: Detected lcore 9 as core 0 on socket 0 00:04:29.840 EAL: Maximum logical cores by configuration: 128 00:04:29.840 EAL: Detected CPU lcores: 10 00:04:29.840 EAL: Detected NUMA nodes: 1 00:04:29.840 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:29.840 EAL: Detected shared linkage of DPDK 00:04:29.840 EAL: No shared files mode enabled, IPC will be disabled 00:04:29.840 EAL: Selected IOVA mode 'PA' 00:04:29.840 EAL: Probing VFIO support... 00:04:29.840 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:29.840 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:29.840 EAL: Ask a virtual area of 0x2e000 bytes 00:04:29.840 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:29.840 EAL: Setting up physically contiguous memory... 00:04:29.840 EAL: Setting maximum number of open files to 524288 00:04:29.840 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:29.840 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:29.840 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.840 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:29.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.840 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.840 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:29.840 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:29.840 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.840 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:29.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.840 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.840 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:29.840 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:29.840 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.840 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:29.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.840 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.840 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:29.840 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:29.840 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.840 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:29.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.840 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.840 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:29.840 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:29.841 EAL: Hugepages will be freed exactly as allocated. 
00:04:29.841 EAL: No shared files mode enabled, IPC is disabled 00:04:29.841 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: TSC frequency is ~2200000 KHz 00:04:30.099 EAL: Main lcore 0 is ready (tid=7f5611c91a00;cpuset=[0]) 00:04:30.099 EAL: Trying to obtain current memory policy. 00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 0 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.099 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.099 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.099 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.099 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.099 00:04:30.099 00:04:30.099 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.099 http://cunit.sourceforge.net/ 00:04:30.099 00:04:30.099 00:04:30.099 Suite: components_suite 00:04:30.099 Test: vtophys_malloc_test ...passed 00:04:30.099 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 4 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.099 EAL: Trying to obtain current memory policy. 00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 4 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.099 EAL: Trying to obtain current memory policy. 00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 4 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.099 EAL: Trying to obtain current memory policy. 
00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 4 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.099 EAL: Trying to obtain current memory policy. 00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 4 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.099 EAL: Trying to obtain current memory policy. 00:04:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.099 EAL: Restoring previous memory policy: 4 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.099 EAL: request: mp_malloc_sync 00:04:30.099 EAL: No shared files mode enabled, IPC is disabled 00:04:30.099 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.100 EAL: request: mp_malloc_sync 00:04:30.100 EAL: No shared files mode enabled, IPC is disabled 00:04:30.100 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.100 EAL: Trying to obtain current memory policy. 00:04:30.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.100 EAL: Restoring previous memory policy: 4 00:04:30.100 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.100 EAL: request: mp_malloc_sync 00:04:30.100 EAL: No shared files mode enabled, IPC is disabled 00:04:30.100 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.100 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.100 EAL: request: mp_malloc_sync 00:04:30.100 EAL: No shared files mode enabled, IPC is disabled 00:04:30.100 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.100 EAL: Trying to obtain current memory policy. 00:04:30.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.358 EAL: Restoring previous memory policy: 4 00:04:30.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.358 EAL: request: mp_malloc_sync 00:04:30.358 EAL: No shared files mode enabled, IPC is disabled 00:04:30.358 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.358 EAL: request: mp_malloc_sync 00:04:30.358 EAL: No shared files mode enabled, IPC is disabled 00:04:30.358 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.358 EAL: Trying to obtain current memory policy. 
00:04:30.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.671 EAL: Restoring previous memory policy: 4 00:04:30.671 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.671 EAL: request: mp_malloc_sync 00:04:30.671 EAL: No shared files mode enabled, IPC is disabled 00:04:30.671 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.671 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.930 EAL: request: mp_malloc_sync 00:04:30.930 EAL: No shared files mode enabled, IPC is disabled 00:04:30.930 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.930 EAL: Trying to obtain current memory policy. 00:04:30.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.188 EAL: Restoring previous memory policy: 4 00:04:31.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.188 EAL: request: mp_malloc_sync 00:04:31.188 EAL: No shared files mode enabled, IPC is disabled 00:04:31.188 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.705 EAL: request: mp_malloc_sync 00:04:31.705 EAL: No shared files mode enabled, IPC is disabled 00:04:31.705 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.705 passed 00:04:31.705 00:04:31.705 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.705 suites 1 1 n/a 0 0 00:04:31.705 tests 2 2 2 0 0 00:04:31.705 asserts 5379 5379 5379 0 n/a 00:04:31.705 00:04:31.705 Elapsed time = 1.741 seconds 00:04:31.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.705 EAL: request: mp_malloc_sync 00:04:31.705 EAL: No shared files mode enabled, IPC is disabled 00:04:31.705 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.705 EAL: No shared files mode enabled, IPC is disabled 00:04:31.705 EAL: No shared files mode enabled, IPC is disabled 00:04:31.705 EAL: No shared files mode enabled, IPC is disabled 00:04:31.705 00:04:31.705 real 0m1.942s 00:04:31.705 user 0m1.129s 00:04:31.705 sys 0m0.682s 00:04:31.705 22:30:49 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.705 22:30:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.705 ************************************ 00:04:31.705 END TEST env_vtophys 00:04:31.705 ************************************ 00:04:31.962 22:30:49 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.962 22:30:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.962 22:30:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.962 22:30:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.962 22:30:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.962 ************************************ 00:04:31.962 START TEST env_pci 00:04:31.962 ************************************ 00:04:31.962 22:30:49 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.962 00:04:31.962 00:04:31.962 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.962 http://cunit.sourceforge.net/ 00:04:31.962 00:04:31.962 00:04:31.962 Suite: pci 00:04:31.962 Test: pci_hook ...[2024-07-15 22:30:49.573380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58577 has claimed it 00:04:31.962 passed 00:04:31.962 00:04:31.962 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.962 suites 1 1 n/a 0 0 00:04:31.962 tests 1 1 1 0 0 00:04:31.962 asserts 25 25 25 0 n/a 00:04:31.962 
00:04:31.962 Elapsed time = 0.002 seconds 00:04:31.962 EAL: Cannot find device (10000:00:01.0) 00:04:31.962 EAL: Failed to attach device on primary process 00:04:31.962 00:04:31.962 real 0m0.023s 00:04:31.962 user 0m0.010s 00:04:31.962 sys 0m0.013s 00:04:31.962 22:30:49 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.962 22:30:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.962 ************************************ 00:04:31.962 END TEST env_pci 00:04:31.962 ************************************ 00:04:31.962 22:30:49 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.962 22:30:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.962 22:30:49 env -- env/env.sh@15 -- # uname 00:04:31.962 22:30:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.962 22:30:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.962 22:30:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.962 22:30:49 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:31.962 22:30:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.962 22:30:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.962 ************************************ 00:04:31.962 START TEST env_dpdk_post_init 00:04:31.962 ************************************ 00:04:31.962 22:30:49 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.962 EAL: Detected CPU lcores: 10 00:04:31.962 EAL: Detected NUMA nodes: 1 00:04:31.962 EAL: Detected shared linkage of DPDK 00:04:31.962 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.962 EAL: Selected IOVA mode 'PA' 00:04:31.962 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.221 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:32.221 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:32.221 Starting DPDK initialization... 00:04:32.221 Starting SPDK post initialization... 00:04:32.221 SPDK NVMe probe 00:04:32.221 Attaching to 0000:00:10.0 00:04:32.221 Attaching to 0000:00:11.0 00:04:32.221 Attached to 0000:00:10.0 00:04:32.221 Attached to 0000:00:11.0 00:04:32.221 Cleaning up... 
00:04:32.221 00:04:32.221 real 0m0.182s 00:04:32.221 user 0m0.043s 00:04:32.221 sys 0m0.039s 00:04:32.221 22:30:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.221 22:30:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.221 ************************************ 00:04:32.221 END TEST env_dpdk_post_init 00:04:32.221 ************************************ 00:04:32.221 22:30:49 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.221 22:30:49 env -- env/env.sh@26 -- # uname 00:04:32.221 22:30:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.221 22:30:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.221 22:30:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.221 22:30:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.221 22:30:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.221 ************************************ 00:04:32.221 START TEST env_mem_callbacks 00:04:32.221 ************************************ 00:04:32.221 22:30:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.221 EAL: Detected CPU lcores: 10 00:04:32.221 EAL: Detected NUMA nodes: 1 00:04:32.221 EAL: Detected shared linkage of DPDK 00:04:32.221 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.221 EAL: Selected IOVA mode 'PA' 00:04:32.221 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.221 00:04:32.221 00:04:32.221 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.221 http://cunit.sourceforge.net/ 00:04:32.221 00:04:32.221 00:04:32.221 Suite: memory 00:04:32.221 Test: test ... 
00:04:32.221 register 0x200000200000 2097152 00:04:32.221 malloc 3145728 00:04:32.221 register 0x200000400000 4194304 00:04:32.221 buf 0x200000500000 len 3145728 PASSED 00:04:32.221 malloc 64 00:04:32.221 buf 0x2000004fff40 len 64 PASSED 00:04:32.221 malloc 4194304 00:04:32.221 register 0x200000800000 6291456 00:04:32.221 buf 0x200000a00000 len 4194304 PASSED 00:04:32.221 free 0x200000500000 3145728 00:04:32.221 free 0x2000004fff40 64 00:04:32.221 unregister 0x200000400000 4194304 PASSED 00:04:32.221 free 0x200000a00000 4194304 00:04:32.221 unregister 0x200000800000 6291456 PASSED 00:04:32.221 malloc 8388608 00:04:32.221 register 0x200000400000 10485760 00:04:32.221 buf 0x200000600000 len 8388608 PASSED 00:04:32.221 free 0x200000600000 8388608 00:04:32.221 unregister 0x200000400000 10485760 PASSED 00:04:32.221 passed 00:04:32.221 00:04:32.221 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.221 suites 1 1 n/a 0 0 00:04:32.221 tests 1 1 1 0 0 00:04:32.221 asserts 15 15 15 0 n/a 00:04:32.221 00:04:32.221 Elapsed time = 0.009 seconds 00:04:32.221 00:04:32.221 real 0m0.146s 00:04:32.221 user 0m0.018s 00:04:32.221 sys 0m0.027s 00:04:32.221 22:30:50 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.221 22:30:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.221 ************************************ 00:04:32.221 END TEST env_mem_callbacks 00:04:32.221 ************************************ 00:04:32.480 22:30:50 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.480 00:04:32.480 real 0m2.899s 00:04:32.480 user 0m1.556s 00:04:32.480 sys 0m0.994s 00:04:32.480 22:30:50 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.480 22:30:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.480 ************************************ 00:04:32.480 END TEST env 00:04:32.480 ************************************ 00:04:32.480 22:30:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.480 22:30:50 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.480 22:30:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.480 22:30:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.480 22:30:50 -- common/autotest_common.sh@10 -- # set +x 00:04:32.480 ************************************ 00:04:32.480 START TEST rpc 00:04:32.480 ************************************ 00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.480 * Looking for test storage... 00:04:32.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.480 22:30:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58686 00:04:32.480 22:30:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.480 22:30:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.480 22:30:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58686 00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@829 -- # '[' -z 58686 ']' 00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
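waitforlisten above polls until the freshly started spdk_tgt (pid 58686) answers on its JSON-RPC socket. Once /var/tmp/spdk.sock exists, the same liveness check can be made by hand with the standard scripts/rpc.py client, which speaks the same JSON-RPC interface the rpc_cmd helper uses throughout this suite (a sketch, assuming the repo layout shown in this run):

# Lightweight liveness probe against the target's RPC socket; spdk_get_version is the
# same call the skip_rpc test issues later in this log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version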
00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.480 22:30:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.480 [2024-07-15 22:30:50.276551] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:32.480 [2024-07-15 22:30:50.276653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58686 ] 00:04:32.738 [2024-07-15 22:30:50.411513] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.738 [2024-07-15 22:30:50.555852] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.738 [2024-07-15 22:30:50.555923] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58686' to capture a snapshot of events at runtime. 00:04:32.738 [2024-07-15 22:30:50.555934] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:32.738 [2024-07-15 22:30:50.555942] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:32.738 [2024-07-15 22:30:50.555949] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58686 for offline analysis/debug. 00:04:32.738 [2024-07-15 22:30:50.555983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.997 [2024-07-15 22:30:50.629610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:33.562 22:30:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.562 22:30:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:33.562 22:30:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.562 22:30:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.562 22:30:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.562 22:30:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.562 22:30:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.562 22:30:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.562 22:30:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.562 ************************************ 00:04:33.562 START TEST rpc_integrity 00:04:33.562 ************************************ 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.562 { 00:04:33.562 "name": "Malloc0", 00:04:33.562 "aliases": [ 00:04:33.562 "e2415a87-0e83-48d5-b8db-5e4ffba053d8" 00:04:33.562 ], 00:04:33.562 "product_name": "Malloc disk", 00:04:33.562 "block_size": 512, 00:04:33.562 "num_blocks": 16384, 00:04:33.562 "uuid": "e2415a87-0e83-48d5-b8db-5e4ffba053d8", 00:04:33.562 "assigned_rate_limits": { 00:04:33.562 "rw_ios_per_sec": 0, 00:04:33.562 "rw_mbytes_per_sec": 0, 00:04:33.562 "r_mbytes_per_sec": 0, 00:04:33.562 "w_mbytes_per_sec": 0 00:04:33.562 }, 00:04:33.562 "claimed": false, 00:04:33.562 "zoned": false, 00:04:33.562 "supported_io_types": { 00:04:33.562 "read": true, 00:04:33.562 "write": true, 00:04:33.562 "unmap": true, 00:04:33.562 "flush": true, 00:04:33.562 "reset": true, 00:04:33.562 "nvme_admin": false, 00:04:33.562 "nvme_io": false, 00:04:33.562 "nvme_io_md": false, 00:04:33.562 "write_zeroes": true, 00:04:33.562 "zcopy": true, 00:04:33.562 "get_zone_info": false, 00:04:33.562 "zone_management": false, 00:04:33.562 "zone_append": false, 00:04:33.562 "compare": false, 00:04:33.562 "compare_and_write": false, 00:04:33.562 "abort": true, 00:04:33.562 "seek_hole": false, 00:04:33.562 "seek_data": false, 00:04:33.562 "copy": true, 00:04:33.562 "nvme_iov_md": false 00:04:33.562 }, 00:04:33.562 "memory_domains": [ 00:04:33.562 { 00:04:33.562 "dma_device_id": "system", 00:04:33.562 "dma_device_type": 1 00:04:33.562 }, 00:04:33.562 { 00:04:33.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.562 "dma_device_type": 2 00:04:33.562 } 00:04:33.562 ], 00:04:33.562 "driver_specific": {} 00:04:33.562 } 00:04:33.562 ]' 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.562 [2024-07-15 22:30:51.372762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.562 [2024-07-15 22:30:51.372835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.562 [2024-07-15 22:30:51.372859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe224d0 00:04:33.562 [2024-07-15 22:30:51.372889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.562 [2024-07-15 22:30:51.375015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.562 [2024-07-15 22:30:51.375054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:33.562 Passthru0 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.562 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.562 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.821 { 00:04:33.821 "name": "Malloc0", 00:04:33.821 "aliases": [ 00:04:33.821 "e2415a87-0e83-48d5-b8db-5e4ffba053d8" 00:04:33.821 ], 00:04:33.821 "product_name": "Malloc disk", 00:04:33.821 "block_size": 512, 00:04:33.821 "num_blocks": 16384, 00:04:33.821 "uuid": "e2415a87-0e83-48d5-b8db-5e4ffba053d8", 00:04:33.821 "assigned_rate_limits": { 00:04:33.821 "rw_ios_per_sec": 0, 00:04:33.821 "rw_mbytes_per_sec": 0, 00:04:33.821 "r_mbytes_per_sec": 0, 00:04:33.821 "w_mbytes_per_sec": 0 00:04:33.821 }, 00:04:33.821 "claimed": true, 00:04:33.821 "claim_type": "exclusive_write", 00:04:33.821 "zoned": false, 00:04:33.821 "supported_io_types": { 00:04:33.821 "read": true, 00:04:33.821 "write": true, 00:04:33.821 "unmap": true, 00:04:33.821 "flush": true, 00:04:33.821 "reset": true, 00:04:33.821 "nvme_admin": false, 00:04:33.821 "nvme_io": false, 00:04:33.821 "nvme_io_md": false, 00:04:33.821 "write_zeroes": true, 00:04:33.821 "zcopy": true, 00:04:33.821 "get_zone_info": false, 00:04:33.821 "zone_management": false, 00:04:33.821 "zone_append": false, 00:04:33.821 "compare": false, 00:04:33.821 "compare_and_write": false, 00:04:33.821 "abort": true, 00:04:33.821 "seek_hole": false, 00:04:33.821 "seek_data": false, 00:04:33.821 "copy": true, 00:04:33.821 "nvme_iov_md": false 00:04:33.821 }, 00:04:33.821 "memory_domains": [ 00:04:33.821 { 00:04:33.821 "dma_device_id": "system", 00:04:33.821 "dma_device_type": 1 00:04:33.821 }, 00:04:33.821 { 00:04:33.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.821 "dma_device_type": 2 00:04:33.821 } 00:04:33.821 ], 00:04:33.821 "driver_specific": {} 00:04:33.821 }, 00:04:33.821 { 00:04:33.821 "name": "Passthru0", 00:04:33.821 "aliases": [ 00:04:33.821 "ee3c4b11-e823-5e34-b904-3f10689c67f1" 00:04:33.821 ], 00:04:33.821 "product_name": "passthru", 00:04:33.821 "block_size": 512, 00:04:33.821 "num_blocks": 16384, 00:04:33.821 "uuid": "ee3c4b11-e823-5e34-b904-3f10689c67f1", 00:04:33.821 "assigned_rate_limits": { 00:04:33.821 "rw_ios_per_sec": 0, 00:04:33.821 "rw_mbytes_per_sec": 0, 00:04:33.821 "r_mbytes_per_sec": 0, 00:04:33.821 "w_mbytes_per_sec": 0 00:04:33.821 }, 00:04:33.821 "claimed": false, 00:04:33.821 "zoned": false, 00:04:33.821 "supported_io_types": { 00:04:33.821 "read": true, 00:04:33.821 "write": true, 00:04:33.821 "unmap": true, 00:04:33.821 "flush": true, 00:04:33.821 "reset": true, 00:04:33.821 "nvme_admin": false, 00:04:33.821 "nvme_io": false, 00:04:33.821 "nvme_io_md": false, 00:04:33.821 "write_zeroes": true, 00:04:33.821 "zcopy": true, 00:04:33.821 "get_zone_info": false, 00:04:33.821 "zone_management": false, 00:04:33.821 "zone_append": false, 00:04:33.821 "compare": false, 00:04:33.821 "compare_and_write": false, 00:04:33.821 "abort": true, 00:04:33.821 "seek_hole": false, 00:04:33.821 "seek_data": false, 00:04:33.821 "copy": true, 00:04:33.821 "nvme_iov_md": false 00:04:33.821 }, 00:04:33.821 "memory_domains": [ 00:04:33.821 { 00:04:33.821 "dma_device_id": "system", 00:04:33.821 
"dma_device_type": 1 00:04:33.821 }, 00:04:33.821 { 00:04:33.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.821 "dma_device_type": 2 00:04:33.821 } 00:04:33.821 ], 00:04:33.821 "driver_specific": { 00:04:33.821 "passthru": { 00:04:33.821 "name": "Passthru0", 00:04:33.821 "base_bdev_name": "Malloc0" 00:04:33.821 } 00:04:33.821 } 00:04:33.821 } 00:04:33.821 ]' 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.821 22:30:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.821 00:04:33.821 real 0m0.333s 00:04:33.821 user 0m0.220s 00:04:33.821 sys 0m0.042s 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.821 22:30:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 ************************************ 00:04:33.821 END TEST rpc_integrity 00:04:33.821 ************************************ 00:04:33.821 22:30:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:33.821 22:30:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.821 22:30:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.821 22:30:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.821 22:30:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 ************************************ 00:04:33.821 START TEST rpc_plugins 00:04:33.821 ************************************ 00:04:33.821 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:33.821 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.821 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.821 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.821 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.821 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.821 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.821 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.821 
22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.821 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.821 { 00:04:33.821 "name": "Malloc1", 00:04:33.821 "aliases": [ 00:04:33.821 "642c6ab2-52ef-414d-9edd-fe3cc41ddf19" 00:04:33.821 ], 00:04:33.821 "product_name": "Malloc disk", 00:04:33.821 "block_size": 4096, 00:04:33.821 "num_blocks": 256, 00:04:33.821 "uuid": "642c6ab2-52ef-414d-9edd-fe3cc41ddf19", 00:04:33.821 "assigned_rate_limits": { 00:04:33.821 "rw_ios_per_sec": 0, 00:04:33.821 "rw_mbytes_per_sec": 0, 00:04:33.821 "r_mbytes_per_sec": 0, 00:04:33.821 "w_mbytes_per_sec": 0 00:04:33.821 }, 00:04:33.821 "claimed": false, 00:04:33.821 "zoned": false, 00:04:33.821 "supported_io_types": { 00:04:33.821 "read": true, 00:04:33.821 "write": true, 00:04:33.821 "unmap": true, 00:04:33.821 "flush": true, 00:04:33.821 "reset": true, 00:04:33.821 "nvme_admin": false, 00:04:33.821 "nvme_io": false, 00:04:33.821 "nvme_io_md": false, 00:04:33.821 "write_zeroes": true, 00:04:33.821 "zcopy": true, 00:04:33.821 "get_zone_info": false, 00:04:33.821 "zone_management": false, 00:04:33.821 "zone_append": false, 00:04:33.821 "compare": false, 00:04:33.821 "compare_and_write": false, 00:04:33.822 "abort": true, 00:04:33.822 "seek_hole": false, 00:04:33.822 "seek_data": false, 00:04:33.822 "copy": true, 00:04:33.822 "nvme_iov_md": false 00:04:33.822 }, 00:04:33.822 "memory_domains": [ 00:04:33.822 { 00:04:33.822 "dma_device_id": "system", 00:04:33.822 "dma_device_type": 1 00:04:33.822 }, 00:04:33.822 { 00:04:33.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.822 "dma_device_type": 2 00:04:33.822 } 00:04:33.822 ], 00:04:33.822 "driver_specific": {} 00:04:33.822 } 00:04:33.822 ]' 00:04:33.822 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.080 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.080 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.080 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.080 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.080 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.080 22:30:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.080 00:04:34.080 real 0m0.164s 00:04:34.080 user 0m0.105s 00:04:34.080 sys 0m0.020s 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.080 ************************************ 00:04:34.080 22:30:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.080 END TEST rpc_plugins 00:04:34.080 ************************************ 00:04:34.080 22:30:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.080 22:30:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.080 22:30:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.080 22:30:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
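The two tests that finish above, rpc_integrity and rpc_plugins, drive bdev lifecycles purely over JSON-RPC: the first stacks a passthru bdev on a malloc bdev and tears both down, the second loads an out-of-tree plugin module from test/rpc_plugins (already on the PYTHONPATH exported earlier) and calls its create_malloc/delete_malloc methods. A by-hand sketch of both sequences, using the same method names and arguments recorded in the traces:

# rpc_integrity lifecycle: create, claim behind a passthru, inspect, tear down.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc bdev_malloc_create 8 512                       # -> Malloc0 (16384 x 512-byte blocks)
rpc bdev_passthru_create -b Malloc0 -p Passthru0   # Malloc0 becomes claimed, Passthru0 on top
rpc bdev_get_bdevs | jq length                     # 2 bdevs while both exist
rpc bdev_passthru_delete Passthru0
rpc bdev_malloc_delete Malloc0
rpc bdev_get_bdevs | jq length                     # back to 0

# rpc_plugins: the plugin module lives in test/rpc_plugins (see the PYTHONPATH export above).
rpc --plugin rpc_plugin create_malloc              # -> Malloc1
rpc --plugin rpc_plugin delete_malloc Malloc1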
00:04:34.080 22:30:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.080 ************************************ 00:04:34.080 START TEST rpc_trace_cmd_test 00:04:34.080 ************************************ 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.080 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.080 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58686", 00:04:34.080 "tpoint_group_mask": "0x8", 00:04:34.080 "iscsi_conn": { 00:04:34.080 "mask": "0x2", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "scsi": { 00:04:34.080 "mask": "0x4", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "bdev": { 00:04:34.080 "mask": "0x8", 00:04:34.080 "tpoint_mask": "0xffffffffffffffff" 00:04:34.080 }, 00:04:34.080 "nvmf_rdma": { 00:04:34.080 "mask": "0x10", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "nvmf_tcp": { 00:04:34.080 "mask": "0x20", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "ftl": { 00:04:34.080 "mask": "0x40", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "blobfs": { 00:04:34.080 "mask": "0x80", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "dsa": { 00:04:34.080 "mask": "0x200", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "thread": { 00:04:34.080 "mask": "0x400", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "nvme_pcie": { 00:04:34.080 "mask": "0x800", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "iaa": { 00:04:34.080 "mask": "0x1000", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "nvme_tcp": { 00:04:34.080 "mask": "0x2000", 00:04:34.080 "tpoint_mask": "0x0" 00:04:34.080 }, 00:04:34.080 "bdev_nvme": { 00:04:34.081 "mask": "0x4000", 00:04:34.081 "tpoint_mask": "0x0" 00:04:34.081 }, 00:04:34.081 "sock": { 00:04:34.081 "mask": "0x8000", 00:04:34.081 "tpoint_mask": "0x0" 00:04:34.081 } 00:04:34.081 }' 00:04:34.081 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.081 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:34.081 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.339 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.339 22:30:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.339 22:30:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.339 22:30:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.339 22:30:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.339 22:30:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.339 22:30:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.339 00:04:34.339 real 0m0.353s 00:04:34.339 user 0m0.308s 00:04:34.339 sys 0m0.034s 00:04:34.339 22:30:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.339 22:30:52 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.339 ************************************ 00:04:34.339 END TEST rpc_trace_cmd_test 00:04:34.339 ************************************ 00:04:34.598 22:30:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.598 22:30:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.598 22:30:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.598 22:30:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.598 22:30:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.598 22:30:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.598 22:30:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.598 ************************************ 00:04:34.598 START TEST rpc_daemon_integrity 00:04:34.598 ************************************ 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.598 { 00:04:34.598 "name": "Malloc2", 00:04:34.598 "aliases": [ 00:04:34.598 "0b5d3563-765c-4d1e-ab3a-c8ed600762ef" 00:04:34.598 ], 00:04:34.598 "product_name": "Malloc disk", 00:04:34.598 "block_size": 512, 00:04:34.598 "num_blocks": 16384, 00:04:34.598 "uuid": "0b5d3563-765c-4d1e-ab3a-c8ed600762ef", 00:04:34.598 "assigned_rate_limits": { 00:04:34.598 "rw_ios_per_sec": 0, 00:04:34.598 "rw_mbytes_per_sec": 0, 00:04:34.598 "r_mbytes_per_sec": 0, 00:04:34.598 "w_mbytes_per_sec": 0 00:04:34.598 }, 00:04:34.598 "claimed": false, 00:04:34.598 "zoned": false, 00:04:34.598 "supported_io_types": { 00:04:34.598 "read": true, 00:04:34.598 "write": true, 00:04:34.598 "unmap": true, 00:04:34.598 "flush": true, 00:04:34.598 "reset": true, 00:04:34.598 "nvme_admin": false, 00:04:34.598 "nvme_io": false, 00:04:34.598 "nvme_io_md": false, 00:04:34.598 "write_zeroes": true, 00:04:34.598 "zcopy": true, 00:04:34.598 "get_zone_info": false, 00:04:34.598 "zone_management": false, 00:04:34.598 "zone_append": false, 
00:04:34.598 "compare": false, 00:04:34.598 "compare_and_write": false, 00:04:34.598 "abort": true, 00:04:34.598 "seek_hole": false, 00:04:34.598 "seek_data": false, 00:04:34.598 "copy": true, 00:04:34.598 "nvme_iov_md": false 00:04:34.598 }, 00:04:34.598 "memory_domains": [ 00:04:34.598 { 00:04:34.598 "dma_device_id": "system", 00:04:34.598 "dma_device_type": 1 00:04:34.598 }, 00:04:34.598 { 00:04:34.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.598 "dma_device_type": 2 00:04:34.598 } 00:04:34.598 ], 00:04:34.598 "driver_specific": {} 00:04:34.598 } 00:04:34.598 ]' 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.598 [2024-07-15 22:30:52.381708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.598 [2024-07-15 22:30:52.381795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.598 [2024-07-15 22:30:52.381818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda3a0 00:04:34.598 [2024-07-15 22:30:52.381829] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.598 [2024-07-15 22:30:52.383495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.598 [2024-07-15 22:30:52.383539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.598 Passthru0 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.598 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.598 { 00:04:34.598 "name": "Malloc2", 00:04:34.598 "aliases": [ 00:04:34.598 "0b5d3563-765c-4d1e-ab3a-c8ed600762ef" 00:04:34.598 ], 00:04:34.598 "product_name": "Malloc disk", 00:04:34.598 "block_size": 512, 00:04:34.598 "num_blocks": 16384, 00:04:34.598 "uuid": "0b5d3563-765c-4d1e-ab3a-c8ed600762ef", 00:04:34.598 "assigned_rate_limits": { 00:04:34.598 "rw_ios_per_sec": 0, 00:04:34.598 "rw_mbytes_per_sec": 0, 00:04:34.598 "r_mbytes_per_sec": 0, 00:04:34.598 "w_mbytes_per_sec": 0 00:04:34.598 }, 00:04:34.598 "claimed": true, 00:04:34.598 "claim_type": "exclusive_write", 00:04:34.598 "zoned": false, 00:04:34.598 "supported_io_types": { 00:04:34.598 "read": true, 00:04:34.598 "write": true, 00:04:34.598 "unmap": true, 00:04:34.598 "flush": true, 00:04:34.598 "reset": true, 00:04:34.598 "nvme_admin": false, 00:04:34.598 "nvme_io": false, 00:04:34.598 "nvme_io_md": false, 00:04:34.598 "write_zeroes": true, 00:04:34.598 "zcopy": true, 00:04:34.598 "get_zone_info": false, 00:04:34.598 "zone_management": false, 00:04:34.598 "zone_append": false, 00:04:34.598 "compare": false, 00:04:34.598 "compare_and_write": false, 00:04:34.598 "abort": true, 00:04:34.598 "seek_hole": 
false, 00:04:34.598 "seek_data": false, 00:04:34.598 "copy": true, 00:04:34.598 "nvme_iov_md": false 00:04:34.598 }, 00:04:34.598 "memory_domains": [ 00:04:34.598 { 00:04:34.598 "dma_device_id": "system", 00:04:34.598 "dma_device_type": 1 00:04:34.598 }, 00:04:34.598 { 00:04:34.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.598 "dma_device_type": 2 00:04:34.598 } 00:04:34.598 ], 00:04:34.598 "driver_specific": {} 00:04:34.598 }, 00:04:34.598 { 00:04:34.598 "name": "Passthru0", 00:04:34.598 "aliases": [ 00:04:34.598 "eacd9719-d6b4-509d-b03c-43cd16fac0cb" 00:04:34.598 ], 00:04:34.598 "product_name": "passthru", 00:04:34.598 "block_size": 512, 00:04:34.598 "num_blocks": 16384, 00:04:34.598 "uuid": "eacd9719-d6b4-509d-b03c-43cd16fac0cb", 00:04:34.598 "assigned_rate_limits": { 00:04:34.598 "rw_ios_per_sec": 0, 00:04:34.598 "rw_mbytes_per_sec": 0, 00:04:34.598 "r_mbytes_per_sec": 0, 00:04:34.598 "w_mbytes_per_sec": 0 00:04:34.598 }, 00:04:34.598 "claimed": false, 00:04:34.598 "zoned": false, 00:04:34.598 "supported_io_types": { 00:04:34.598 "read": true, 00:04:34.598 "write": true, 00:04:34.598 "unmap": true, 00:04:34.598 "flush": true, 00:04:34.598 "reset": true, 00:04:34.598 "nvme_admin": false, 00:04:34.598 "nvme_io": false, 00:04:34.598 "nvme_io_md": false, 00:04:34.598 "write_zeroes": true, 00:04:34.598 "zcopy": true, 00:04:34.598 "get_zone_info": false, 00:04:34.598 "zone_management": false, 00:04:34.598 "zone_append": false, 00:04:34.598 "compare": false, 00:04:34.598 "compare_and_write": false, 00:04:34.598 "abort": true, 00:04:34.598 "seek_hole": false, 00:04:34.598 "seek_data": false, 00:04:34.598 "copy": true, 00:04:34.598 "nvme_iov_md": false 00:04:34.598 }, 00:04:34.598 "memory_domains": [ 00:04:34.598 { 00:04:34.598 "dma_device_id": "system", 00:04:34.598 "dma_device_type": 1 00:04:34.598 }, 00:04:34.598 { 00:04:34.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.598 "dma_device_type": 2 00:04:34.598 } 00:04:34.598 ], 00:04:34.598 "driver_specific": { 00:04:34.598 "passthru": { 00:04:34.599 "name": "Passthru0", 00:04:34.599 "base_bdev_name": "Malloc2" 00:04:34.599 } 00:04:34.599 } 00:04:34.599 } 00:04:34.599 ]' 00:04:34.599 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.857 00:04:34.857 real 0m0.330s 00:04:34.857 user 0m0.223s 00:04:34.857 sys 0m0.038s 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.857 22:30:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 ************************************ 00:04:34.857 END TEST rpc_daemon_integrity 00:04:34.857 ************************************ 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.857 22:30:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.857 22:30:52 rpc -- rpc/rpc.sh@84 -- # killprocess 58686 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@948 -- # '[' -z 58686 ']' 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@952 -- # kill -0 58686 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@953 -- # uname 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58686 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.857 killing process with pid 58686 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58686' 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@967 -- # kill 58686 00:04:34.857 22:30:52 rpc -- common/autotest_common.sh@972 -- # wait 58686 00:04:35.423 00:04:35.423 real 0m2.887s 00:04:35.423 user 0m3.714s 00:04:35.423 sys 0m0.755s 00:04:35.423 22:30:53 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.423 22:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.423 ************************************ 00:04:35.423 END TEST rpc 00:04:35.423 ************************************ 00:04:35.423 22:30:53 -- common/autotest_common.sh@1142 -- # return 0 00:04:35.423 22:30:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.423 22:30:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.423 22:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.423 22:30:53 -- common/autotest_common.sh@10 -- # set +x 00:04:35.423 ************************************ 00:04:35.423 START TEST skip_rpc 00:04:35.423 ************************************ 00:04:35.423 22:30:53 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.423 * Looking for test storage... 
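killprocess above is the common teardown used after every suite in this log: it confirms the pid is still alive with kill -0, identifies the process (reactor_0 here), then sends it a TERM and reaps it with wait. Condensed to its effect for the target that just finished (pid 58686):

# Liveness check, terminate, and reap the spdk_tgt reactor; mirrors the traced
# kill -0 / kill / wait sequence shown above.
kill -0 58686 && kill 58686 && wait 58686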
00:04:35.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.423 22:30:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.423 22:30:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.423 22:30:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.423 22:30:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.423 22:30:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.424 22:30:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.424 ************************************ 00:04:35.424 START TEST skip_rpc 00:04:35.424 ************************************ 00:04:35.424 22:30:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:35.424 22:30:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58879 00:04:35.424 22:30:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.424 22:30:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.424 22:30:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.424 [2024-07-15 22:30:53.251323] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:35.424 [2024-07-15 22:30:53.251479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:04:35.681 [2024-07-15 22:30:53.393835] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.938 [2024-07-15 22:30:53.578312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.938 [2024-07-15 22:30:53.659957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58879 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58879 ']' 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58879 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58879 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.231 killing process with pid 58879 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58879' 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58879 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58879 00:04:41.231 00:04:41.231 real 0m5.469s 00:04:41.231 user 0m4.945s 00:04:41.231 sys 0m0.418s 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.231 ************************************ 00:04:41.231 END TEST skip_rpc 00:04:41.231 ************************************ 00:04:41.231 22:30:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 22:30:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.231 22:30:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.231 22:30:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.231 22:30:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.231 22:30:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 ************************************ 00:04:41.231 START TEST skip_rpc_with_json 00:04:41.231 ************************************ 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58971 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58971 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 58971 ']' 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
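skip_rpc, which finishes above, and skip_rpc_with_json, which starts here with pid 58971, test two halves of running without an RPC server: first a target started with --no-rpc-server must reject the spdk_get_version call (the NOT wrapper expects a non-zero exit), then a fully RPC-configured target has its state dumped with save_config and a new target is later booted straight from that JSON via --json (the dump and restart appear further down in this log). A rough by-hand version of both halves, reusing the recorded paths and flags:

# Half 1 (skip_rpc): no RPC server, so the client call is expected to fail.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version && echo "unexpected: RPC answered"

# Half 2 (skip_rpc_with_json): configure over RPC, dump, then boot purely from the dump.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json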
00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.231 22:30:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 [2024-07-15 22:30:58.753018] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:41.231 [2024-07-15 22:30:58.753154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:04:41.231 [2024-07-15 22:30:58.894146] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.231 [2024-07-15 22:30:59.049291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.491 [2024-07-15 22:30:59.109166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.057 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.057 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:42.057 22:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.058 [2024-07-15 22:30:59.818418] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.058 request: 00:04:42.058 { 00:04:42.058 "trtype": "tcp", 00:04:42.058 "method": "nvmf_get_transports", 00:04:42.058 "req_id": 1 00:04:42.058 } 00:04:42.058 Got JSON-RPC error response 00:04:42.058 response: 00:04:42.058 { 00:04:42.058 "code": -19, 00:04:42.058 "message": "No such device" 00:04:42.058 } 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.058 [2024-07-15 22:30:59.830522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.058 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.317 22:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.317 22:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.317 { 00:04:42.317 "subsystems": [ 00:04:42.317 { 00:04:42.317 "subsystem": "keyring", 00:04:42.317 "config": [] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "iobuf", 00:04:42.317 "config": [ 00:04:42.317 { 00:04:42.317 "method": "iobuf_set_options", 00:04:42.317 "params": { 00:04:42.317 "small_pool_count": 8192, 00:04:42.317 "large_pool_count": 1024, 00:04:42.317 "small_bufsize": 8192, 00:04:42.317 "large_bufsize": 135168 00:04:42.317 } 00:04:42.317 } 00:04:42.317 
] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "sock", 00:04:42.317 "config": [ 00:04:42.317 { 00:04:42.317 "method": "sock_set_default_impl", 00:04:42.317 "params": { 00:04:42.317 "impl_name": "uring" 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "sock_impl_set_options", 00:04:42.317 "params": { 00:04:42.317 "impl_name": "ssl", 00:04:42.317 "recv_buf_size": 4096, 00:04:42.317 "send_buf_size": 4096, 00:04:42.317 "enable_recv_pipe": true, 00:04:42.317 "enable_quickack": false, 00:04:42.317 "enable_placement_id": 0, 00:04:42.317 "enable_zerocopy_send_server": true, 00:04:42.317 "enable_zerocopy_send_client": false, 00:04:42.317 "zerocopy_threshold": 0, 00:04:42.317 "tls_version": 0, 00:04:42.317 "enable_ktls": false 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "sock_impl_set_options", 00:04:42.317 "params": { 00:04:42.317 "impl_name": "posix", 00:04:42.317 "recv_buf_size": 2097152, 00:04:42.317 "send_buf_size": 2097152, 00:04:42.317 "enable_recv_pipe": true, 00:04:42.317 "enable_quickack": false, 00:04:42.317 "enable_placement_id": 0, 00:04:42.317 "enable_zerocopy_send_server": true, 00:04:42.317 "enable_zerocopy_send_client": false, 00:04:42.317 "zerocopy_threshold": 0, 00:04:42.317 "tls_version": 0, 00:04:42.317 "enable_ktls": false 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "sock_impl_set_options", 00:04:42.317 "params": { 00:04:42.317 "impl_name": "uring", 00:04:42.317 "recv_buf_size": 2097152, 00:04:42.317 "send_buf_size": 2097152, 00:04:42.317 "enable_recv_pipe": true, 00:04:42.317 "enable_quickack": false, 00:04:42.317 "enable_placement_id": 0, 00:04:42.317 "enable_zerocopy_send_server": false, 00:04:42.317 "enable_zerocopy_send_client": false, 00:04:42.317 "zerocopy_threshold": 0, 00:04:42.317 "tls_version": 0, 00:04:42.317 "enable_ktls": false 00:04:42.317 } 00:04:42.317 } 00:04:42.317 ] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "vmd", 00:04:42.317 "config": [] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "accel", 00:04:42.317 "config": [ 00:04:42.317 { 00:04:42.317 "method": "accel_set_options", 00:04:42.317 "params": { 00:04:42.317 "small_cache_size": 128, 00:04:42.317 "large_cache_size": 16, 00:04:42.317 "task_count": 2048, 00:04:42.317 "sequence_count": 2048, 00:04:42.317 "buf_count": 2048 00:04:42.317 } 00:04:42.317 } 00:04:42.317 ] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "bdev", 00:04:42.317 "config": [ 00:04:42.317 { 00:04:42.317 "method": "bdev_set_options", 00:04:42.317 "params": { 00:04:42.317 "bdev_io_pool_size": 65535, 00:04:42.317 "bdev_io_cache_size": 256, 00:04:42.317 "bdev_auto_examine": true, 00:04:42.317 "iobuf_small_cache_size": 128, 00:04:42.317 "iobuf_large_cache_size": 16 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "bdev_raid_set_options", 00:04:42.317 "params": { 00:04:42.317 "process_window_size_kb": 1024 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "bdev_iscsi_set_options", 00:04:42.317 "params": { 00:04:42.317 "timeout_sec": 30 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "bdev_nvme_set_options", 00:04:42.317 "params": { 00:04:42.317 "action_on_timeout": "none", 00:04:42.317 "timeout_us": 0, 00:04:42.317 "timeout_admin_us": 0, 00:04:42.317 "keep_alive_timeout_ms": 10000, 00:04:42.317 "arbitration_burst": 0, 00:04:42.317 "low_priority_weight": 0, 00:04:42.317 "medium_priority_weight": 0, 00:04:42.317 "high_priority_weight": 0, 00:04:42.317 
"nvme_adminq_poll_period_us": 10000, 00:04:42.317 "nvme_ioq_poll_period_us": 0, 00:04:42.317 "io_queue_requests": 0, 00:04:42.317 "delay_cmd_submit": true, 00:04:42.317 "transport_retry_count": 4, 00:04:42.317 "bdev_retry_count": 3, 00:04:42.317 "transport_ack_timeout": 0, 00:04:42.317 "ctrlr_loss_timeout_sec": 0, 00:04:42.317 "reconnect_delay_sec": 0, 00:04:42.317 "fast_io_fail_timeout_sec": 0, 00:04:42.317 "disable_auto_failback": false, 00:04:42.317 "generate_uuids": false, 00:04:42.317 "transport_tos": 0, 00:04:42.317 "nvme_error_stat": false, 00:04:42.317 "rdma_srq_size": 0, 00:04:42.317 "io_path_stat": false, 00:04:42.317 "allow_accel_sequence": false, 00:04:42.317 "rdma_max_cq_size": 0, 00:04:42.317 "rdma_cm_event_timeout_ms": 0, 00:04:42.317 "dhchap_digests": [ 00:04:42.317 "sha256", 00:04:42.317 "sha384", 00:04:42.317 "sha512" 00:04:42.317 ], 00:04:42.317 "dhchap_dhgroups": [ 00:04:42.317 "null", 00:04:42.317 "ffdhe2048", 00:04:42.317 "ffdhe3072", 00:04:42.317 "ffdhe4096", 00:04:42.317 "ffdhe6144", 00:04:42.317 "ffdhe8192" 00:04:42.317 ] 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "bdev_nvme_set_hotplug", 00:04:42.317 "params": { 00:04:42.317 "period_us": 100000, 00:04:42.317 "enable": false 00:04:42.317 } 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "method": "bdev_wait_for_examine" 00:04:42.317 } 00:04:42.317 ] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "scsi", 00:04:42.317 "config": null 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "scheduler", 00:04:42.317 "config": [ 00:04:42.317 { 00:04:42.317 "method": "framework_set_scheduler", 00:04:42.317 "params": { 00:04:42.317 "name": "static" 00:04:42.317 } 00:04:42.317 } 00:04:42.317 ] 00:04:42.317 }, 00:04:42.317 { 00:04:42.317 "subsystem": "vhost_scsi", 00:04:42.317 "config": [] 00:04:42.317 }, 00:04:42.317 { 00:04:42.318 "subsystem": "vhost_blk", 00:04:42.318 "config": [] 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "subsystem": "ublk", 00:04:42.318 "config": [] 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "subsystem": "nbd", 00:04:42.318 "config": [] 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "subsystem": "nvmf", 00:04:42.318 "config": [ 00:04:42.318 { 00:04:42.318 "method": "nvmf_set_config", 00:04:42.318 "params": { 00:04:42.318 "discovery_filter": "match_any", 00:04:42.318 "admin_cmd_passthru": { 00:04:42.318 "identify_ctrlr": false 00:04:42.318 } 00:04:42.318 } 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "method": "nvmf_set_max_subsystems", 00:04:42.318 "params": { 00:04:42.318 "max_subsystems": 1024 00:04:42.318 } 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "method": "nvmf_set_crdt", 00:04:42.318 "params": { 00:04:42.318 "crdt1": 0, 00:04:42.318 "crdt2": 0, 00:04:42.318 "crdt3": 0 00:04:42.318 } 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "method": "nvmf_create_transport", 00:04:42.318 "params": { 00:04:42.318 "trtype": "TCP", 00:04:42.318 "max_queue_depth": 128, 00:04:42.318 "max_io_qpairs_per_ctrlr": 127, 00:04:42.318 "in_capsule_data_size": 4096, 00:04:42.318 "max_io_size": 131072, 00:04:42.318 "io_unit_size": 131072, 00:04:42.318 "max_aq_depth": 128, 00:04:42.318 "num_shared_buffers": 511, 00:04:42.318 "buf_cache_size": 4294967295, 00:04:42.318 "dif_insert_or_strip": false, 00:04:42.318 "zcopy": false, 00:04:42.318 "c2h_success": true, 00:04:42.318 "sock_priority": 0, 00:04:42.318 "abort_timeout_sec": 1, 00:04:42.318 "ack_timeout": 0, 00:04:42.318 "data_wr_pool_size": 0 00:04:42.318 } 00:04:42.318 } 00:04:42.318 ] 00:04:42.318 }, 00:04:42.318 { 00:04:42.318 "subsystem": 
"iscsi", 00:04:42.318 "config": [ 00:04:42.318 { 00:04:42.318 "method": "iscsi_set_options", 00:04:42.318 "params": { 00:04:42.318 "node_base": "iqn.2016-06.io.spdk", 00:04:42.318 "max_sessions": 128, 00:04:42.318 "max_connections_per_session": 2, 00:04:42.318 "max_queue_depth": 64, 00:04:42.318 "default_time2wait": 2, 00:04:42.318 "default_time2retain": 20, 00:04:42.318 "first_burst_length": 8192, 00:04:42.318 "immediate_data": true, 00:04:42.318 "allow_duplicated_isid": false, 00:04:42.318 "error_recovery_level": 0, 00:04:42.318 "nop_timeout": 60, 00:04:42.318 "nop_in_interval": 30, 00:04:42.318 "disable_chap": false, 00:04:42.318 "require_chap": false, 00:04:42.318 "mutual_chap": false, 00:04:42.318 "chap_group": 0, 00:04:42.318 "max_large_datain_per_connection": 64, 00:04:42.318 "max_r2t_per_connection": 4, 00:04:42.318 "pdu_pool_size": 36864, 00:04:42.318 "immediate_data_pool_size": 16384, 00:04:42.318 "data_out_pool_size": 2048 00:04:42.318 } 00:04:42.318 } 00:04:42.318 ] 00:04:42.318 } 00:04:42.318 ] 00:04:42.318 } 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58971 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58971 ']' 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58971 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58971 00:04:42.318 killing process with pid 58971 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58971' 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58971 00:04:42.318 22:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58971 00:04:42.884 22:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58993 00:04:42.884 22:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.884 22:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58993 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58993 ']' 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58993 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58993 00:04:48.190 killing process with pid 58993 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.190 22:31:05 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58993' 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58993 00:04:48.190 22:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58993 00:04:48.448 22:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.448 22:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.448 ************************************ 00:04:48.448 END TEST skip_rpc_with_json 00:04:48.448 ************************************ 00:04:48.448 00:04:48.448 real 0m7.357s 00:04:48.448 user 0m7.139s 00:04:48.449 sys 0m0.686s 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.449 22:31:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.449 ************************************ 00:04:48.449 START TEST skip_rpc_with_delay 00:04:48.449 ************************************ 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.449 [2024-07-15 
22:31:06.172756] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:48.449 [2024-07-15 22:31:06.172931] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:48.449 ************************************ 00:04:48.449 END TEST skip_rpc_with_delay 00:04:48.449 ************************************ 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:48.449 00:04:48.449 real 0m0.095s 00:04:48.449 user 0m0.059s 00:04:48.449 sys 0m0.035s 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.449 22:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.449 22:31:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.449 22:31:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.449 22:31:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.449 22:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.449 ************************************ 00:04:48.449 START TEST exit_on_failed_rpc_init 00:04:48.449 ************************************ 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59108 00:04:48.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59108 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59108 ']' 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.449 22:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.707 [2024-07-15 22:31:06.310132] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:04:48.707 [2024-07-15 22:31:06.310250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ] 00:04:48.707 [2024-07-15 22:31:06.443845] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.965 [2024-07-15 22:31:06.568110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.965 [2024-07-15 22:31:06.644396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:49.531 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.531 [2024-07-15 22:31:07.350863] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:49.531 [2024-07-15 22:31:07.350983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 00:04:49.789 [2024-07-15 22:31:07.493294] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.789 [2024-07-15 22:31:07.615902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.789 [2024-07-15 22:31:07.616013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:49.789 [2024-07-15 22:31:07.616031] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:49.789 [2024-07-15 22:31:07.616042] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59108 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59108 ']' 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59108 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59108 00:04:50.047 killing process with pid 59108 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59108' 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59108 00:04:50.047 22:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59108 00:04:50.305 ************************************ 00:04:50.305 END TEST exit_on_failed_rpc_init 00:04:50.305 ************************************ 00:04:50.305 00:04:50.305 real 0m1.868s 00:04:50.305 user 0m2.110s 00:04:50.305 sys 0m0.510s 00:04:50.305 22:31:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.305 22:31:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.564 22:31:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.564 22:31:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.564 00:04:50.564 real 0m15.112s 00:04:50.564 user 0m14.347s 00:04:50.564 sys 0m1.862s 00:04:50.564 22:31:08 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.564 ************************************ 00:04:50.564 END TEST skip_rpc 00:04:50.564 ************************************ 00:04:50.564 22:31:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.564 22:31:08 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.564 22:31:08 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.564 22:31:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.564 
22:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.564 22:31:08 -- common/autotest_common.sh@10 -- # set +x 00:04:50.564 ************************************ 00:04:50.564 START TEST rpc_client 00:04:50.564 ************************************ 00:04:50.564 22:31:08 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.564 * Looking for test storage... 00:04:50.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:50.564 22:31:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.564 OK 00:04:50.564 22:31:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.564 00:04:50.564 real 0m0.113s 00:04:50.564 user 0m0.048s 00:04:50.564 sys 0m0.069s 00:04:50.564 ************************************ 00:04:50.564 END TEST rpc_client 00:04:50.564 ************************************ 00:04:50.564 22:31:08 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.564 22:31:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.564 22:31:08 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.564 22:31:08 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.564 22:31:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.564 22:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.564 22:31:08 -- common/autotest_common.sh@10 -- # set +x 00:04:50.564 ************************************ 00:04:50.564 START TEST json_config 00:04:50.564 ************************************ 00:04:50.564 22:31:08 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.823 22:31:08 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.823 22:31:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.823 22:31:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.823 22:31:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.823 22:31:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.823 22:31:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.823 22:31:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.823 22:31:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.823 22:31:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@47 -- # : 0 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:50.823 22:31:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:50.823 INFO: JSON configuration test init 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:50.823 22:31:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.824 Waiting for target to run... 00:04:50.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.824 22:31:08 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:50.824 22:31:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:50.824 22:31:08 json_config -- json_config/common.sh@10 -- # shift 00:04:50.824 22:31:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.824 22:31:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.824 22:31:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.824 22:31:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.824 22:31:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.824 22:31:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59244 00:04:50.824 22:31:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
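[editor's note] At this point the target has been launched with --wait-for-rpc, so it idles until a configuration is pushed over the RPC socket; the trace that follows drives that with gen_nvme.sh and load_config. A minimal stand-alone sketch of the same pattern, with paths and flags taken from the log; the plain polling loop below is a stand-in for the suite's waitforlisten helper and the pipe into load_config is inferred from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # sketch only: wait for the RPC unix socket to appear before sending commands
    while [ ! -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done
    # generate a bdev/nvmf config from local NVMe devices and push it into the idle target
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
        | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config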
00:04:50.824 22:31:08 json_config -- json_config/common.sh@25 -- # waitforlisten 59244 /var/tmp/spdk_tgt.sock 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 59244 ']' 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.824 22:31:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.824 22:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.824 [2024-07-15 22:31:08.546585] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:50.824 [2024-07-15 22:31:08.546957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59244 ] 00:04:51.389 [2024-07-15 22:31:08.983945] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.389 [2024-07-15 22:31:09.062900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.647 22:31:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.647 22:31:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:51.647 22:31:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.647 00:04:51.647 22:31:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:51.647 22:31:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:51.647 22:31:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.647 22:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.905 22:31:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:51.905 22:31:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:51.905 22:31:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.905 22:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.905 22:31:09 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:51.905 22:31:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:51.905 22:31:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.162 [2024-07-15 22:31:09.796263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:52.162 22:31:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:52.162 22:31:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.162 22:31:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.162 22:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.425 22:31:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.425 22:31:09 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.425 22:31:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.425 22:31:10 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:52.425 22:31:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.425 22:31:10 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:52.690 22:31:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.690 22:31:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:52.690 22:31:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.690 22:31:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:52.690 22:31:10 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.690 22:31:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.947 MallocForNvmf0 00:04:52.947 22:31:10 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.947 22:31:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.205 MallocForNvmf1 00:04:53.205 22:31:10 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.205 22:31:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.464 [2024-07-15 22:31:11.107986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.464 22:31:11 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.464 22:31:11 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.722 22:31:11 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.722 22:31:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.722 22:31:11 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.722 22:31:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.979 22:31:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.979 22:31:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.236 [2024-07-15 22:31:11.976615] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:54.236 22:31:11 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:54.236 22:31:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.236 22:31:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.236 22:31:12 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:54.236 22:31:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.236 22:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.493 22:31:12 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:54.493 22:31:12 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.493 22:31:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.493 MallocBdevForConfigChangeCheck 00:04:54.493 22:31:12 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:54.493 22:31:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.493 22:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.751 22:31:12 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:54.751 22:31:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.010 INFO: shutting down applications... 00:04:55.010 22:31:12 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
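[editor's note] The subsystem configuration assembled above (two malloc bdevs, a TCP transport, and nqn.2016-06.io.spdk:cnode1 listening on 127.0.0.1:4420) is then snapshotted with save_config. A consolidated sketch of the same RPC sequence, assuming a target is already serving /var/tmp/spdk_tgt.sock; every call and flag is taken from the trace above, only the redirect to spdk_tgt_config.json is inferred:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB malloc bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json   # snapshot compared against later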
00:04:55.010 22:31:12 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:55.010 22:31:12 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:55.010 22:31:12 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:55.010 22:31:12 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:55.268 Calling clear_iscsi_subsystem 00:04:55.268 Calling clear_nvmf_subsystem 00:04:55.268 Calling clear_nbd_subsystem 00:04:55.268 Calling clear_ublk_subsystem 00:04:55.268 Calling clear_vhost_blk_subsystem 00:04:55.268 Calling clear_vhost_scsi_subsystem 00:04:55.268 Calling clear_bdev_subsystem 00:04:55.268 22:31:13 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:55.268 22:31:13 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:55.268 22:31:13 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:55.268 22:31:13 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.268 22:31:13 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:55.268 22:31:13 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:55.833 22:31:13 json_config -- json_config/json_config.sh@345 -- # break 00:04:55.833 22:31:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:55.833 22:31:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:55.833 22:31:13 json_config -- json_config/common.sh@31 -- # local app=target 00:04:55.833 22:31:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:55.833 22:31:13 json_config -- json_config/common.sh@35 -- # [[ -n 59244 ]] 00:04:55.833 22:31:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59244 00:04:55.833 22:31:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:55.833 22:31:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.833 22:31:13 json_config -- json_config/common.sh@41 -- # kill -0 59244 00:04:55.833 22:31:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.399 22:31:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.399 22:31:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.399 22:31:13 json_config -- json_config/common.sh@41 -- # kill -0 59244 00:04:56.399 22:31:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.399 22:31:13 json_config -- json_config/common.sh@43 -- # break 00:04:56.399 SPDK target shutdown done 00:04:56.399 INFO: relaunching applications... 00:04:56.399 22:31:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.399 22:31:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.399 22:31:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
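[editor's note] Before relaunching, the test tears the configuration back down with clear_config.py and then checks that the target is really empty. A short sketch of that check using the helper scripts seen in the trace above; the piping between the filter methods is inferred, not shown verbatim in the log:

    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # after stripping global parameters, the saved config should contain no subsystem entries
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
        | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty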
00:04:56.399 22:31:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.399 22:31:13 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.399 22:31:13 json_config -- json_config/common.sh@10 -- # shift 00:04:56.399 22:31:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.399 22:31:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.399 22:31:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.399 22:31:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.399 22:31:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.399 Waiting for target to run... 00:04:56.399 22:31:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59435 00:04:56.399 22:31:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.399 22:31:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.399 22:31:13 json_config -- json_config/common.sh@25 -- # waitforlisten 59435 /var/tmp/spdk_tgt.sock 00:04:56.399 22:31:13 json_config -- common/autotest_common.sh@829 -- # '[' -z 59435 ']' 00:04:56.399 22:31:13 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.399 22:31:13 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.399 22:31:13 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.399 22:31:13 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.399 22:31:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.399 [2024-07-15 22:31:14.032016] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:56.399 [2024-07-15 22:31:14.032102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:04:56.657 [2024-07-15 22:31:14.475437] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.915 [2024-07-15 22:31:14.583573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.915 [2024-07-15 22:31:14.710382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:57.173 [2024-07-15 22:31:14.923962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.173 [2024-07-15 22:31:14.956042] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:57.431 00:04:57.431 INFO: Checking if target configuration is the same... 
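[editor's note] The "configuration is the same" verification that follows dumps the live config from the relaunched target, normalizes both sides with config_filter.py, and diffs them. A minimal sketch of the comparison json_diff.sh performs, with temporary-file handling omitted; the stdin/stdout plumbing of config_filter.py is assumed from the trace:

    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # live configuration, sorted into a canonical order
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $FILTER -method sort > /tmp/live_sorted.json
    # the file the target was started from, sorted the same way
    $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/file_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'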
00:04:57.431 22:31:15 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.431 22:31:15 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:57.431 22:31:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.431 22:31:15 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:57.431 22:31:15 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:57.431 22:31:15 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.431 22:31:15 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:57.431 22:31:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.431 + '[' 2 -ne 2 ']' 00:04:57.431 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:57.431 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:57.431 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:57.431 +++ basename /dev/fd/62 00:04:57.431 ++ mktemp /tmp/62.XXX 00:04:57.431 + tmp_file_1=/tmp/62.uAc 00:04:57.431 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.431 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.431 + tmp_file_2=/tmp/spdk_tgt_config.json.o9F 00:04:57.431 + ret=0 00:04:57.431 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:57.690 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:57.949 + diff -u /tmp/62.uAc /tmp/spdk_tgt_config.json.o9F 00:04:57.949 INFO: JSON config files are the same 00:04:57.949 + echo 'INFO: JSON config files are the same' 00:04:57.949 + rm /tmp/62.uAc /tmp/spdk_tgt_config.json.o9F 00:04:57.949 + exit 0 00:04:57.949 INFO: changing configuration and checking if this can be detected... 00:04:57.949 22:31:15 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:57.949 22:31:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:57.949 22:31:15 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.949 22:31:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.207 22:31:15 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:58.207 22:31:15 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.207 22:31:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.207 + '[' 2 -ne 2 ']' 00:04:58.207 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:58.207 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:58.207 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:58.207 +++ basename /dev/fd/62 00:04:58.207 ++ mktemp /tmp/62.XXX 00:04:58.207 + tmp_file_1=/tmp/62.W1R 00:04:58.207 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.207 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.207 + tmp_file_2=/tmp/spdk_tgt_config.json.R5g 00:04:58.207 + ret=0 00:04:58.207 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:58.466 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:58.466 + diff -u /tmp/62.W1R /tmp/spdk_tgt_config.json.R5g 00:04:58.466 + ret=1 00:04:58.466 + echo '=== Start of file: /tmp/62.W1R ===' 00:04:58.466 + cat /tmp/62.W1R 00:04:58.466 + echo '=== End of file: /tmp/62.W1R ===' 00:04:58.466 + echo '' 00:04:58.466 + echo '=== Start of file: /tmp/spdk_tgt_config.json.R5g ===' 00:04:58.466 + cat /tmp/spdk_tgt_config.json.R5g 00:04:58.466 + echo '=== End of file: /tmp/spdk_tgt_config.json.R5g ===' 00:04:58.466 + echo '' 00:04:58.466 + rm /tmp/62.W1R /tmp/spdk_tgt_config.json.R5g 00:04:58.466 + exit 1 00:04:58.466 INFO: configuration change detected. 00:04:58.466 22:31:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:58.466 22:31:16 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:58.466 22:31:16 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:58.466 22:31:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.466 22:31:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@317 -- # [[ -n 59435 ]] 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.735 22:31:16 json_config -- json_config/json_config.sh@323 -- # killprocess 59435 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@948 -- # '[' -z 59435 ']' 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@952 -- # kill -0 59435 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@953 -- # uname 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59435 00:04:58.735 
killing process with pid 59435 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59435' 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@967 -- # kill 59435 00:04:58.735 22:31:16 json_config -- common/autotest_common.sh@972 -- # wait 59435 00:04:59.006 22:31:16 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:59.006 22:31:16 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:59.006 22:31:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.006 22:31:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.006 INFO: Success 00:04:59.006 22:31:16 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:59.006 22:31:16 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:59.006 00:04:59.006 real 0m8.308s 00:04:59.006 user 0m11.771s 00:04:59.006 sys 0m1.829s 00:04:59.006 ************************************ 00:04:59.006 END TEST json_config 00:04:59.006 ************************************ 00:04:59.006 22:31:16 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.006 22:31:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.006 22:31:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.006 22:31:16 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.006 22:31:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.006 22:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.006 22:31:16 -- common/autotest_common.sh@10 -- # set +x 00:04:59.006 ************************************ 00:04:59.006 START TEST json_config_extra_key 00:04:59.006 ************************************ 00:04:59.006 22:31:16 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.006 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.006 22:31:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.007 22:31:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.007 22:31:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.007 22:31:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.007 22:31:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.007 22:31:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.007 22:31:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.007 22:31:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.007 22:31:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.007 22:31:16 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.007 22:31:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.007 INFO: launching applications... 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.007 22:31:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.007 Waiting for target to run... 00:04:59.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59575 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59575 /var/tmp/spdk_tgt.sock 00:04:59.007 22:31:16 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59575 ']' 00:04:59.007 22:31:16 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.007 22:31:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.007 22:31:16 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.007 22:31:16 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.007 22:31:16 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.007 22:31:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.265 [2024-07-15 22:31:16.900584] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:04:59.265 [2024-07-15 22:31:16.900984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ] 00:04:59.524 [2024-07-15 22:31:17.355812] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.782 [2024-07-15 22:31:17.450801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.782 [2024-07-15 22:31:17.471913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.347 22:31:17 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.347 00:05:00.347 INFO: shutting down applications... 00:05:00.347 22:31:17 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.347 22:31:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
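At this point the target has been launched and waitforlisten has confirmed that pid 59575 is serving RPCs on /var/tmp/spdk_tgt.sock. A condensed, hedged sketch of that launch-and-wait step (the real waitforlisten in common/autotest_common.sh does more checking; the retry count and interval below are illustrative):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_sock=/var/tmp/spdk_tgt.sock
    # Start the target with the same arguments as in the log above.
    "$spdk_tgt" -m 0x1 -s 1024 -r "$rpc_sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # Poll until the RPC socket answers; rpc_get_methods fails until the app listens.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done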
00:05:00.347 22:31:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59575 ]] 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59575 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59575 00:05:00.347 22:31:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59575 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.604 SPDK target shutdown done 00:05:00.604 Success 00:05:00.604 22:31:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.604 22:31:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.604 00:05:00.604 real 0m1.687s 00:05:00.604 user 0m1.606s 00:05:00.604 sys 0m0.505s 00:05:00.604 ************************************ 00:05:00.604 END TEST json_config_extra_key 00:05:00.604 ************************************ 00:05:00.604 22:31:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.604 22:31:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.863 22:31:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.863 22:31:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.863 22:31:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.863 22:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.863 22:31:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.863 ************************************ 00:05:00.863 START TEST alias_rpc 00:05:00.863 ************************************ 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.863 * Looking for test storage... 00:05:00.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:00.863 22:31:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
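Before the alias_rpc run continues below, note the shutdown sequence json_config_extra_key just completed for pid 59575: send SIGINT, then poll with kill -0 until the process exits. A sketch of that loop, mirroring the 30 iterations of 0.5 s visible in json_config/common.sh above:

    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only checks that the process still exists; it sends no signal.
        kill -0 "$tgt_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'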
00:05:00.863 22:31:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59645 00:05:00.863 22:31:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59645 00:05:00.863 22:31:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59645 ']' 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.863 22:31:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.863 [2024-07-15 22:31:18.637225] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:00.863 [2024-07-15 22:31:18.637606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59645 ] 00:05:01.121 [2024-07-15 22:31:18.773080] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.121 [2024-07-15 22:31:18.897978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.377 [2024-07-15 22:31:18.957180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.943 22:31:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.943 22:31:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.943 22:31:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:02.201 22:31:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59645 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59645 ']' 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59645 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59645 00:05:02.201 killing process with pid 59645 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59645' 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@967 -- # kill 59645 00:05:02.201 22:31:19 alias_rpc -- common/autotest_common.sh@972 -- # wait 59645 00:05:02.767 ************************************ 00:05:02.767 END TEST alias_rpc 00:05:02.767 ************************************ 00:05:02.767 00:05:02.767 real 0m1.939s 00:05:02.767 user 0m2.241s 00:05:02.767 sys 0m0.448s 00:05:02.767 22:31:20 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.767 22:31:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.767 22:31:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.767 22:31:20 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:02.767 22:31:20 -- spdk/autotest.sh@177 -- # 
run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.767 22:31:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.767 22:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.767 22:31:20 -- common/autotest_common.sh@10 -- # set +x 00:05:02.767 ************************************ 00:05:02.767 START TEST spdkcli_tcp 00:05:02.767 ************************************ 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.767 * Looking for test storage... 00:05:02.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59721 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59721 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59721 ']' 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.767 22:31:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.767 22:31:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.026 [2024-07-15 22:31:20.645612] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
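spdkcli_tcp exercises the RPC server over TCP rather than over the Unix socket: once spdk_tgt (pid 59721) is up, the test bridges 127.0.0.1:9998 to /var/tmp/spdk.sock with socat and points rpc.py at the TCP side, as this log shows further down. A condensed sketch of that bridge, using only the addresses and flags that appear in this run:

    # Forward TCP port 9998 to the target's RPC Unix socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Issue an RPC over TCP (-r retries, -t timeout, as in the log below).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"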
00:05:03.026 [2024-07-15 22:31:20.645748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59721 ] 00:05:03.026 [2024-07-15 22:31:20.789855] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.284 [2024-07-15 22:31:20.937428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.284 [2024-07-15 22:31:20.937442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.284 [2024-07-15 22:31:21.000918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.849 22:31:21 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.849 22:31:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:03.850 22:31:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59738 00:05:03.850 22:31:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.850 22:31:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:04.108 [ 00:05:04.108 "bdev_malloc_delete", 00:05:04.108 "bdev_malloc_create", 00:05:04.108 "bdev_null_resize", 00:05:04.108 "bdev_null_delete", 00:05:04.108 "bdev_null_create", 00:05:04.108 "bdev_nvme_cuse_unregister", 00:05:04.108 "bdev_nvme_cuse_register", 00:05:04.108 "bdev_opal_new_user", 00:05:04.108 "bdev_opal_set_lock_state", 00:05:04.108 "bdev_opal_delete", 00:05:04.108 "bdev_opal_get_info", 00:05:04.108 "bdev_opal_create", 00:05:04.108 "bdev_nvme_opal_revert", 00:05:04.108 "bdev_nvme_opal_init", 00:05:04.108 "bdev_nvme_send_cmd", 00:05:04.108 "bdev_nvme_get_path_iostat", 00:05:04.108 "bdev_nvme_get_mdns_discovery_info", 00:05:04.108 "bdev_nvme_stop_mdns_discovery", 00:05:04.108 "bdev_nvme_start_mdns_discovery", 00:05:04.108 "bdev_nvme_set_multipath_policy", 00:05:04.108 "bdev_nvme_set_preferred_path", 00:05:04.108 "bdev_nvme_get_io_paths", 00:05:04.108 "bdev_nvme_remove_error_injection", 00:05:04.108 "bdev_nvme_add_error_injection", 00:05:04.108 "bdev_nvme_get_discovery_info", 00:05:04.108 "bdev_nvme_stop_discovery", 00:05:04.108 "bdev_nvme_start_discovery", 00:05:04.108 "bdev_nvme_get_controller_health_info", 00:05:04.108 "bdev_nvme_disable_controller", 00:05:04.108 "bdev_nvme_enable_controller", 00:05:04.108 "bdev_nvme_reset_controller", 00:05:04.108 "bdev_nvme_get_transport_statistics", 00:05:04.108 "bdev_nvme_apply_firmware", 00:05:04.108 "bdev_nvme_detach_controller", 00:05:04.108 "bdev_nvme_get_controllers", 00:05:04.108 "bdev_nvme_attach_controller", 00:05:04.108 "bdev_nvme_set_hotplug", 00:05:04.108 "bdev_nvme_set_options", 00:05:04.108 "bdev_passthru_delete", 00:05:04.108 "bdev_passthru_create", 00:05:04.108 "bdev_lvol_set_parent_bdev", 00:05:04.108 "bdev_lvol_set_parent", 00:05:04.108 "bdev_lvol_check_shallow_copy", 00:05:04.108 "bdev_lvol_start_shallow_copy", 00:05:04.108 "bdev_lvol_grow_lvstore", 00:05:04.108 "bdev_lvol_get_lvols", 00:05:04.108 "bdev_lvol_get_lvstores", 00:05:04.108 "bdev_lvol_delete", 00:05:04.108 "bdev_lvol_set_read_only", 00:05:04.108 "bdev_lvol_resize", 00:05:04.108 "bdev_lvol_decouple_parent", 00:05:04.108 "bdev_lvol_inflate", 00:05:04.108 "bdev_lvol_rename", 00:05:04.108 "bdev_lvol_clone_bdev", 00:05:04.108 "bdev_lvol_clone", 00:05:04.108 "bdev_lvol_snapshot", 00:05:04.108 "bdev_lvol_create", 
00:05:04.108 "bdev_lvol_delete_lvstore", 00:05:04.108 "bdev_lvol_rename_lvstore", 00:05:04.108 "bdev_lvol_create_lvstore", 00:05:04.108 "bdev_raid_set_options", 00:05:04.108 "bdev_raid_remove_base_bdev", 00:05:04.108 "bdev_raid_add_base_bdev", 00:05:04.108 "bdev_raid_delete", 00:05:04.108 "bdev_raid_create", 00:05:04.108 "bdev_raid_get_bdevs", 00:05:04.108 "bdev_error_inject_error", 00:05:04.108 "bdev_error_delete", 00:05:04.108 "bdev_error_create", 00:05:04.108 "bdev_split_delete", 00:05:04.108 "bdev_split_create", 00:05:04.108 "bdev_delay_delete", 00:05:04.108 "bdev_delay_create", 00:05:04.108 "bdev_delay_update_latency", 00:05:04.108 "bdev_zone_block_delete", 00:05:04.108 "bdev_zone_block_create", 00:05:04.108 "blobfs_create", 00:05:04.108 "blobfs_detect", 00:05:04.108 "blobfs_set_cache_size", 00:05:04.108 "bdev_aio_delete", 00:05:04.108 "bdev_aio_rescan", 00:05:04.108 "bdev_aio_create", 00:05:04.108 "bdev_ftl_set_property", 00:05:04.108 "bdev_ftl_get_properties", 00:05:04.108 "bdev_ftl_get_stats", 00:05:04.108 "bdev_ftl_unmap", 00:05:04.108 "bdev_ftl_unload", 00:05:04.108 "bdev_ftl_delete", 00:05:04.108 "bdev_ftl_load", 00:05:04.108 "bdev_ftl_create", 00:05:04.108 "bdev_virtio_attach_controller", 00:05:04.108 "bdev_virtio_scsi_get_devices", 00:05:04.108 "bdev_virtio_detach_controller", 00:05:04.108 "bdev_virtio_blk_set_hotplug", 00:05:04.108 "bdev_iscsi_delete", 00:05:04.108 "bdev_iscsi_create", 00:05:04.108 "bdev_iscsi_set_options", 00:05:04.108 "bdev_uring_delete", 00:05:04.108 "bdev_uring_rescan", 00:05:04.108 "bdev_uring_create", 00:05:04.108 "accel_error_inject_error", 00:05:04.108 "ioat_scan_accel_module", 00:05:04.108 "dsa_scan_accel_module", 00:05:04.108 "iaa_scan_accel_module", 00:05:04.108 "keyring_file_remove_key", 00:05:04.108 "keyring_file_add_key", 00:05:04.108 "keyring_linux_set_options", 00:05:04.108 "iscsi_get_histogram", 00:05:04.108 "iscsi_enable_histogram", 00:05:04.108 "iscsi_set_options", 00:05:04.108 "iscsi_get_auth_groups", 00:05:04.108 "iscsi_auth_group_remove_secret", 00:05:04.108 "iscsi_auth_group_add_secret", 00:05:04.108 "iscsi_delete_auth_group", 00:05:04.108 "iscsi_create_auth_group", 00:05:04.108 "iscsi_set_discovery_auth", 00:05:04.108 "iscsi_get_options", 00:05:04.108 "iscsi_target_node_request_logout", 00:05:04.108 "iscsi_target_node_set_redirect", 00:05:04.108 "iscsi_target_node_set_auth", 00:05:04.108 "iscsi_target_node_add_lun", 00:05:04.108 "iscsi_get_stats", 00:05:04.108 "iscsi_get_connections", 00:05:04.108 "iscsi_portal_group_set_auth", 00:05:04.108 "iscsi_start_portal_group", 00:05:04.108 "iscsi_delete_portal_group", 00:05:04.108 "iscsi_create_portal_group", 00:05:04.108 "iscsi_get_portal_groups", 00:05:04.108 "iscsi_delete_target_node", 00:05:04.108 "iscsi_target_node_remove_pg_ig_maps", 00:05:04.108 "iscsi_target_node_add_pg_ig_maps", 00:05:04.108 "iscsi_create_target_node", 00:05:04.108 "iscsi_get_target_nodes", 00:05:04.108 "iscsi_delete_initiator_group", 00:05:04.108 "iscsi_initiator_group_remove_initiators", 00:05:04.108 "iscsi_initiator_group_add_initiators", 00:05:04.108 "iscsi_create_initiator_group", 00:05:04.108 "iscsi_get_initiator_groups", 00:05:04.108 "nvmf_set_crdt", 00:05:04.108 "nvmf_set_config", 00:05:04.108 "nvmf_set_max_subsystems", 00:05:04.108 "nvmf_stop_mdns_prr", 00:05:04.108 "nvmf_publish_mdns_prr", 00:05:04.108 "nvmf_subsystem_get_listeners", 00:05:04.108 "nvmf_subsystem_get_qpairs", 00:05:04.108 "nvmf_subsystem_get_controllers", 00:05:04.108 "nvmf_get_stats", 00:05:04.108 "nvmf_get_transports", 00:05:04.108 
"nvmf_create_transport", 00:05:04.108 "nvmf_get_targets", 00:05:04.108 "nvmf_delete_target", 00:05:04.108 "nvmf_create_target", 00:05:04.109 "nvmf_subsystem_allow_any_host", 00:05:04.109 "nvmf_subsystem_remove_host", 00:05:04.109 "nvmf_subsystem_add_host", 00:05:04.109 "nvmf_ns_remove_host", 00:05:04.109 "nvmf_ns_add_host", 00:05:04.109 "nvmf_subsystem_remove_ns", 00:05:04.109 "nvmf_subsystem_add_ns", 00:05:04.109 "nvmf_subsystem_listener_set_ana_state", 00:05:04.109 "nvmf_discovery_get_referrals", 00:05:04.109 "nvmf_discovery_remove_referral", 00:05:04.109 "nvmf_discovery_add_referral", 00:05:04.109 "nvmf_subsystem_remove_listener", 00:05:04.109 "nvmf_subsystem_add_listener", 00:05:04.109 "nvmf_delete_subsystem", 00:05:04.109 "nvmf_create_subsystem", 00:05:04.109 "nvmf_get_subsystems", 00:05:04.109 "env_dpdk_get_mem_stats", 00:05:04.109 "nbd_get_disks", 00:05:04.109 "nbd_stop_disk", 00:05:04.109 "nbd_start_disk", 00:05:04.109 "ublk_recover_disk", 00:05:04.109 "ublk_get_disks", 00:05:04.109 "ublk_stop_disk", 00:05:04.109 "ublk_start_disk", 00:05:04.109 "ublk_destroy_target", 00:05:04.109 "ublk_create_target", 00:05:04.109 "virtio_blk_create_transport", 00:05:04.109 "virtio_blk_get_transports", 00:05:04.109 "vhost_controller_set_coalescing", 00:05:04.109 "vhost_get_controllers", 00:05:04.109 "vhost_delete_controller", 00:05:04.109 "vhost_create_blk_controller", 00:05:04.109 "vhost_scsi_controller_remove_target", 00:05:04.109 "vhost_scsi_controller_add_target", 00:05:04.109 "vhost_start_scsi_controller", 00:05:04.109 "vhost_create_scsi_controller", 00:05:04.109 "thread_set_cpumask", 00:05:04.109 "framework_get_governor", 00:05:04.109 "framework_get_scheduler", 00:05:04.109 "framework_set_scheduler", 00:05:04.109 "framework_get_reactors", 00:05:04.109 "thread_get_io_channels", 00:05:04.109 "thread_get_pollers", 00:05:04.109 "thread_get_stats", 00:05:04.109 "framework_monitor_context_switch", 00:05:04.109 "spdk_kill_instance", 00:05:04.109 "log_enable_timestamps", 00:05:04.109 "log_get_flags", 00:05:04.109 "log_clear_flag", 00:05:04.109 "log_set_flag", 00:05:04.109 "log_get_level", 00:05:04.109 "log_set_level", 00:05:04.109 "log_get_print_level", 00:05:04.109 "log_set_print_level", 00:05:04.109 "framework_enable_cpumask_locks", 00:05:04.109 "framework_disable_cpumask_locks", 00:05:04.109 "framework_wait_init", 00:05:04.109 "framework_start_init", 00:05:04.109 "scsi_get_devices", 00:05:04.109 "bdev_get_histogram", 00:05:04.109 "bdev_enable_histogram", 00:05:04.109 "bdev_set_qos_limit", 00:05:04.109 "bdev_set_qd_sampling_period", 00:05:04.109 "bdev_get_bdevs", 00:05:04.109 "bdev_reset_iostat", 00:05:04.109 "bdev_get_iostat", 00:05:04.109 "bdev_examine", 00:05:04.109 "bdev_wait_for_examine", 00:05:04.109 "bdev_set_options", 00:05:04.109 "notify_get_notifications", 00:05:04.109 "notify_get_types", 00:05:04.109 "accel_get_stats", 00:05:04.109 "accel_set_options", 00:05:04.109 "accel_set_driver", 00:05:04.109 "accel_crypto_key_destroy", 00:05:04.109 "accel_crypto_keys_get", 00:05:04.109 "accel_crypto_key_create", 00:05:04.109 "accel_assign_opc", 00:05:04.109 "accel_get_module_info", 00:05:04.109 "accel_get_opc_assignments", 00:05:04.109 "vmd_rescan", 00:05:04.109 "vmd_remove_device", 00:05:04.109 "vmd_enable", 00:05:04.109 "sock_get_default_impl", 00:05:04.109 "sock_set_default_impl", 00:05:04.109 "sock_impl_set_options", 00:05:04.109 "sock_impl_get_options", 00:05:04.109 "iobuf_get_stats", 00:05:04.109 "iobuf_set_options", 00:05:04.109 "framework_get_pci_devices", 00:05:04.109 
"framework_get_config", 00:05:04.109 "framework_get_subsystems", 00:05:04.109 "trace_get_info", 00:05:04.109 "trace_get_tpoint_group_mask", 00:05:04.109 "trace_disable_tpoint_group", 00:05:04.109 "trace_enable_tpoint_group", 00:05:04.109 "trace_clear_tpoint_mask", 00:05:04.109 "trace_set_tpoint_mask", 00:05:04.109 "keyring_get_keys", 00:05:04.109 "spdk_get_version", 00:05:04.109 "rpc_get_methods" 00:05:04.109 ] 00:05:04.109 22:31:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:04.109 22:31:21 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.109 22:31:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.368 22:31:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:04.368 22:31:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59721 00:05:04.368 22:31:21 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59721 ']' 00:05:04.368 22:31:21 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59721 00:05:04.368 22:31:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:04.368 22:31:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.368 22:31:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59721 00:05:04.368 killing process with pid 59721 00:05:04.368 22:31:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.368 22:31:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.368 22:31:22 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59721' 00:05:04.368 22:31:22 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59721 00:05:04.368 22:31:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59721 00:05:04.627 ************************************ 00:05:04.627 END TEST spdkcli_tcp 00:05:04.627 ************************************ 00:05:04.627 00:05:04.627 real 0m1.937s 00:05:04.627 user 0m3.572s 00:05:04.627 sys 0m0.515s 00:05:04.627 22:31:22 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.627 22:31:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.627 22:31:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.627 22:31:22 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.627 22:31:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.627 22:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.627 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:05:04.885 ************************************ 00:05:04.885 START TEST dpdk_mem_utility 00:05:04.885 ************************************ 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.885 * Looking for test storage... 
00:05:04.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:04.885 22:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.885 22:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59812 00:05:04.885 22:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.885 22:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59812 00:05:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59812 ']' 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.885 22:31:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.885 [2024-07-15 22:31:22.609319] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:04.885 [2024-07-15 22:31:22.609406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59812 ] 00:05:05.143 [2024-07-15 22:31:22.743149] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.143 [2024-07-15 22:31:22.856920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.143 [2024-07-15 22:31:22.914715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.076 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.076 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:06.076 22:31:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:06.076 22:31:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:06.076 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.076 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.076 { 00:05:06.076 "filename": "/tmp/spdk_mem_dump.txt" 00:05:06.076 } 00:05:06.077 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.077 22:31:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.077 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:06.077 1 heaps totaling size 814.000000 MiB 00:05:06.077 size: 814.000000 MiB heap id: 0 00:05:06.077 end heaps---------- 00:05:06.077 8 mempools totaling size 598.116089 MiB 00:05:06.077 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:06.077 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:06.077 size: 84.521057 MiB name: bdev_io_59812 00:05:06.077 size: 51.011292 MiB name: evtpool_59812 00:05:06.077 size: 50.003479 
MiB name: msgpool_59812 00:05:06.077 size: 21.763794 MiB name: PDU_Pool 00:05:06.077 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:06.077 size: 0.026123 MiB name: Session_Pool 00:05:06.077 end mempools------- 00:05:06.077 6 memzones totaling size 4.142822 MiB 00:05:06.077 size: 1.000366 MiB name: RG_ring_0_59812 00:05:06.077 size: 1.000366 MiB name: RG_ring_1_59812 00:05:06.077 size: 1.000366 MiB name: RG_ring_4_59812 00:05:06.077 size: 1.000366 MiB name: RG_ring_5_59812 00:05:06.077 size: 0.125366 MiB name: RG_ring_2_59812 00:05:06.077 size: 0.015991 MiB name: RG_ring_3_59812 00:05:06.077 end memzones------- 00:05:06.077 22:31:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:06.077 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:05:06.077 list of free elements. size: 12.471375 MiB 00:05:06.077 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:06.077 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:06.077 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:06.077 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:06.077 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:06.077 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:06.077 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:06.077 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:06.077 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:06.077 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:05:06.077 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:06.077 element at address: 0x200000800000 with size: 0.486328 MiB 00:05:06.077 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:06.077 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:06.077 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:06.077 list of standard malloc elements. 
size: 199.266052 MiB 00:05:06.077 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:06.077 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:06.077 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:06.077 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:06.077 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:06.077 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:06.077 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:06.077 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:06.077 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:06.077 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:06.077 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:06.077 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:06.077 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91d80 
with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94240 with size: 0.000183 MiB 
00:05:06.078 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:06.078 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:06.078 element at 
address: 0x200027e6d500 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:06.078 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6f9c0 
with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:06.079 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:06.079 list of memzone associated elements. size: 602.262573 MiB 00:05:06.079 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:06.079 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:06.079 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:06.079 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:06.079 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:06.079 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59812_0 00:05:06.079 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:06.079 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59812_0 00:05:06.079 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:06.079 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59812_0 00:05:06.079 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:06.079 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:06.079 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:06.079 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:06.079 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:06.079 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59812 00:05:06.079 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:06.079 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59812 00:05:06.079 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:06.079 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59812 00:05:06.079 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:06.079 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:06.079 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:06.079 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:06.079 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:06.079 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:06.079 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:06.079 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:06.079 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:06.079 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59812 00:05:06.079 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:06.079 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59812 00:05:06.079 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:06.079 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59812 00:05:06.079 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:06.079 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59812 00:05:06.079 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:06.079 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59812 
00:05:06.079 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:06.079 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:06.079 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:06.079 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:06.079 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:06.079 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:06.079 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:06.079 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59812 00:05:06.079 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:06.079 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:06.079 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:06.079 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:06.079 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:06.079 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59812 00:05:06.079 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:06.079 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:06.079 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:06.079 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59812 00:05:06.079 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:06.079 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59812 00:05:06.079 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:06.079 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:06.079 22:31:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:06.079 22:31:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59812 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59812 ']' 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59812 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59812 00:05:06.079 killing process with pid 59812 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59812' 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59812 00:05:06.079 22:31:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59812 00:05:06.337 ************************************ 00:05:06.337 END TEST dpdk_mem_utility 00:05:06.337 ************************************ 00:05:06.337 00:05:06.337 real 0m1.703s 00:05:06.337 user 0m1.835s 00:05:06.337 sys 0m0.448s 00:05:06.337 22:31:24 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.337 22:31:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 22:31:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:06.595 22:31:24 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:06.595 22:31:24 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.595 22:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.595 22:31:24 -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 ************************************ 00:05:06.595 START TEST event 00:05:06.595 ************************************ 00:05:06.595 22:31:24 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:06.595 * Looking for test storage... 00:05:06.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:06.595 22:31:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:06.595 22:31:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:06.595 22:31:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.595 22:31:24 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:06.595 22:31:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.595 22:31:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 ************************************ 00:05:06.595 START TEST event_perf 00:05:06.595 ************************************ 00:05:06.595 22:31:24 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.595 Running I/O for 1 seconds...[2024-07-15 22:31:24.348784] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:06.595 [2024-07-15 22:31:24.349073] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59884 ] 00:05:06.853 [2024-07-15 22:31:24.490309] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.853 [2024-07-15 22:31:24.625014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.853 [2024-07-15 22:31:24.625151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.853 [2024-07-15 22:31:24.625311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.853 [2024-07-15 22:31:24.625312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.252 Running I/O for 1 seconds... 00:05:08.252 lcore 0: 116528 00:05:08.252 lcore 1: 116529 00:05:08.252 lcore 2: 116531 00:05:08.252 lcore 3: 116531 00:05:08.252 done. 
00:05:08.252 00:05:08.252 real 0m1.397s 00:05:08.252 user 0m4.201s 00:05:08.252 sys 0m0.072s 00:05:08.252 22:31:25 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.252 22:31:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.252 ************************************ 00:05:08.252 END TEST event_perf 00:05:08.252 ************************************ 00:05:08.252 22:31:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:08.252 22:31:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:08.252 22:31:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:08.252 22:31:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.252 22:31:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.252 ************************************ 00:05:08.252 START TEST event_reactor 00:05:08.252 ************************************ 00:05:08.252 22:31:25 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:08.252 [2024-07-15 22:31:25.802002] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:08.252 [2024-07-15 22:31:25.802102] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:05:08.252 [2024-07-15 22:31:25.943102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.252 [2024-07-15 22:31:26.068385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.627 test_start 00:05:09.627 oneshot 00:05:09.627 tick 100 00:05:09.627 tick 100 00:05:09.627 tick 250 00:05:09.627 tick 100 00:05:09.627 tick 100 00:05:09.627 tick 250 00:05:09.627 tick 100 00:05:09.627 tick 500 00:05:09.627 tick 100 00:05:09.627 tick 100 00:05:09.627 tick 250 00:05:09.627 tick 100 00:05:09.627 tick 100 00:05:09.627 test_end 00:05:09.627 ************************************ 00:05:09.627 END TEST event_reactor 00:05:09.627 ************************************ 00:05:09.627 00:05:09.627 real 0m1.385s 00:05:09.627 user 0m1.205s 00:05:09.627 sys 0m0.072s 00:05:09.627 22:31:27 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.627 22:31:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:09.627 22:31:27 event -- common/autotest_common.sh@1142 -- # return 0 00:05:09.627 22:31:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.627 22:31:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:09.627 22:31:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.627 22:31:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.627 ************************************ 00:05:09.627 START TEST event_reactor_perf 00:05:09.627 ************************************ 00:05:09.627 22:31:27 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.628 [2024-07-15 22:31:27.236984] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:09.628 [2024-07-15 22:31:27.237766] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:05:09.628 [2024-07-15 22:31:27.374046] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.885 [2024-07-15 22:31:27.502917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.818 test_start 00:05:10.818 test_end 00:05:10.818 Performance: 363932 events per second 00:05:10.818 00:05:10.818 real 0m1.384s 00:05:10.818 user 0m1.212s 00:05:10.818 sys 0m0.064s 00:05:10.818 22:31:28 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.818 ************************************ 00:05:10.818 END TEST event_reactor_perf 00:05:10.818 22:31:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 ************************************ 00:05:10.818 22:31:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.818 22:31:28 event -- event/event.sh@49 -- # uname -s 00:05:10.818 22:31:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.818 22:31:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.818 22:31:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.818 22:31:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.818 22:31:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.076 ************************************ 00:05:11.076 START TEST event_scheduler 00:05:11.076 ************************************ 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.076 * Looking for test storage... 00:05:11.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:11.076 22:31:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:11.076 22:31:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60019 00:05:11.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.076 22:31:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.076 22:31:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:11.076 22:31:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60019 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60019 ']' 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.076 22:31:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.076 [2024-07-15 22:31:28.801985] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:11.076 [2024-07-15 22:31:28.802085] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60019 ] 00:05:11.334 [2024-07-15 22:31:28.945155] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.334 [2024-07-15 22:31:29.093575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.334 [2024-07-15 22:31:29.093666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.334 [2024-07-15 22:31:29.093813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.334 [2024-07-15 22:31:29.093820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:12.266 22:31:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.266 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.266 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.266 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.266 POWER: Cannot set governor of lcore 0 to performance 00:05:12.266 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.266 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.266 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.266 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.266 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:12.266 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:12.266 POWER: Unable to set Power Management Environment for lcore 0 00:05:12.266 [2024-07-15 22:31:29.847087] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:12.266 [2024-07-15 22:31:29.847284] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:12.266 [2024-07-15 22:31:29.847518] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.266 [2024-07-15 22:31:29.847634] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.266 [2024-07-15 22:31:29.847819] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.266 [2024-07-15 22:31:29.848052] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.266 22:31:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.266 22:31:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 [2024-07-15 22:31:29.915426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.267 [2024-07-15 22:31:29.955170] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:12.267 22:31:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.267 22:31:29 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.267 22:31:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.267 22:31:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 ************************************ 00:05:12.267 START TEST scheduler_create_thread 00:05:12.267 ************************************ 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 2 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 3 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 4 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 5 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 6 00:05:12.267 
22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 7 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 8 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 9 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 10 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.267 22:31:30 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.267 22:31:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.165 22:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.165 22:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.165 22:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.165 22:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.165 22:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 ************************************ 00:05:15.098 END TEST scheduler_create_thread 00:05:15.098 ************************************ 00:05:15.098 22:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.098 00:05:15.098 real 0m2.615s 00:05:15.098 user 0m0.019s 00:05:15.098 sys 0m0.006s 00:05:15.098 22:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.098 22:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:15.098 22:31:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.098 22:31:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60019 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60019 ']' 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60019 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60019 00:05:15.098 killing process with pid 60019 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60019' 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60019 00:05:15.098 22:31:32 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60019 00:05:15.356 [2024-07-15 22:31:33.064097] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:15.613 ************************************ 00:05:15.613 END TEST event_scheduler 00:05:15.613 ************************************ 00:05:15.613 00:05:15.613 real 0m4.674s 00:05:15.613 user 0m8.779s 00:05:15.613 sys 0m0.428s 00:05:15.613 22:31:33 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.613 22:31:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.613 22:31:33 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.613 22:31:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.613 22:31:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.613 22:31:33 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.613 22:31:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.613 22:31:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.613 ************************************ 00:05:15.613 START TEST app_repeat 00:05:15.613 ************************************ 00:05:15.613 22:31:33 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60119 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.613 Process app_repeat pid: 60119 00:05:15.613 22:31:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60119' 00:05:15.614 22:31:33 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.614 spdk_app_start Round 0 00:05:15.614 22:31:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.614 22:31:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.614 22:31:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60119 /var/tmp/spdk-nbd.sock 00:05:15.614 22:31:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60119 ']' 00:05:15.614 22:31:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.614 22:31:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.614 22:31:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.614 22:31:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.614 22:31:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.614 [2024-07-15 22:31:33.425639] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:15.614 [2024-07-15 22:31:33.425716] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60119 ] 00:05:15.871 [2024-07-15 22:31:33.560374] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.871 [2024-07-15 22:31:33.690385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.871 [2024-07-15 22:31:33.690395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.130 [2024-07-15 22:31:33.752653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.694 22:31:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.694 22:31:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.694 22:31:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.952 Malloc0 00:05:16.952 22:31:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.210 Malloc1 00:05:17.468 22:31:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.468 22:31:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.726 /dev/nbd0 00:05:17.726 22:31:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.726 22:31:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.726 22:31:35 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.726 1+0 records in 00:05:17.726 1+0 records out 00:05:17.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003878 s, 10.6 MB/s 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.726 22:31:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.726 22:31:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.726 22:31:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.726 22:31:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.984 /dev/nbd1 00:05:17.984 22:31:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.984 22:31:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.984 1+0 records in 00:05:17.984 1+0 records out 00:05:17.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588129 s, 7.0 MB/s 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.984 22:31:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.985 22:31:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.985 22:31:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.985 22:31:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.985 22:31:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.985 22:31:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.985 22:31:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:17.985 22:31:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.985 22:31:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.243 22:31:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.243 { 00:05:18.243 "nbd_device": "/dev/nbd0", 00:05:18.243 "bdev_name": "Malloc0" 00:05:18.243 }, 00:05:18.243 { 00:05:18.243 "nbd_device": "/dev/nbd1", 00:05:18.243 "bdev_name": "Malloc1" 00:05:18.243 } 00:05:18.243 ]' 00:05:18.243 22:31:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.243 { 00:05:18.243 "nbd_device": "/dev/nbd0", 00:05:18.243 "bdev_name": "Malloc0" 00:05:18.243 }, 00:05:18.243 { 00:05:18.243 "nbd_device": "/dev/nbd1", 00:05:18.243 "bdev_name": "Malloc1" 00:05:18.243 } 00:05:18.243 ]' 00:05:18.243 22:31:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.243 /dev/nbd1' 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.243 /dev/nbd1' 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.243 256+0 records in 00:05:18.243 256+0 records out 00:05:18.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610876 s, 172 MB/s 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.243 256+0 records in 00:05:18.243 256+0 records out 00:05:18.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217666 s, 48.2 MB/s 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.243 22:31:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.501 256+0 records in 00:05:18.501 256+0 records out 00:05:18.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025824 s, 40.6 MB/s 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.501 22:31:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.759 22:31:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.085 22:31:36 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.085 22:31:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.343 22:31:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.343 22:31:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.343 22:31:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.343 22:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.343 22:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.343 22:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.344 22:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.344 22:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.344 22:31:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.344 22:31:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.344 22:31:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.344 22:31:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.344 22:31:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.601 22:31:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.860 [2024-07-15 22:31:37.539176] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.860 [2024-07-15 22:31:37.638146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.860 [2024-07-15 22:31:37.638160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.117 [2024-07-15 22:31:37.699443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.117 [2024-07-15 22:31:37.699550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.117 [2024-07-15 22:31:37.699564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.643 spdk_app_start Round 1 00:05:22.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.643 22:31:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.643 22:31:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.643 22:31:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60119 /var/tmp/spdk-nbd.sock 00:05:22.643 22:31:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60119 ']' 00:05:22.643 22:31:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.643 22:31:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.643 22:31:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:22.643 22:31:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.643 22:31:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.900 22:31:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.900 22:31:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:22.900 22:31:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.156 Malloc0 00:05:23.156 22:31:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.413 Malloc1 00:05:23.413 22:31:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.413 22:31:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.414 22:31:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.671 /dev/nbd0 00:05:23.929 22:31:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.929 22:31:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.929 1+0 records in 00:05:23.929 1+0 records out 
00:05:23.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262712 s, 15.6 MB/s 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.929 22:31:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:23.929 22:31:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.929 22:31:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.929 22:31:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.187 /dev/nbd1 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.187 1+0 records in 00:05:24.187 1+0 records out 00:05:24.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280204 s, 14.6 MB/s 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.187 22:31:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.187 22:31:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.445 { 00:05:24.445 "nbd_device": "/dev/nbd0", 00:05:24.445 "bdev_name": "Malloc0" 00:05:24.445 }, 00:05:24.445 { 00:05:24.445 "nbd_device": "/dev/nbd1", 00:05:24.445 "bdev_name": "Malloc1" 00:05:24.445 } 
00:05:24.445 ]' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.445 { 00:05:24.445 "nbd_device": "/dev/nbd0", 00:05:24.445 "bdev_name": "Malloc0" 00:05:24.445 }, 00:05:24.445 { 00:05:24.445 "nbd_device": "/dev/nbd1", 00:05:24.445 "bdev_name": "Malloc1" 00:05:24.445 } 00:05:24.445 ]' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.445 /dev/nbd1' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.445 /dev/nbd1' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.445 256+0 records in 00:05:24.445 256+0 records out 00:05:24.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00843096 s, 124 MB/s 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.445 22:31:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.702 256+0 records in 00:05:24.702 256+0 records out 00:05:24.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278452 s, 37.7 MB/s 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.703 256+0 records in 00:05:24.703 256+0 records out 00:05:24.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314728 s, 33.3 MB/s 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.703 22:31:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.703 22:31:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.997 22:31:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.254 22:31:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.511 22:31:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.511 22:31:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.075 22:31:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.333 [2024-07-15 22:31:43.918840] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.333 [2024-07-15 22:31:44.079309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.333 [2024-07-15 22:31:44.079323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.333 [2024-07-15 22:31:44.163396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.333 [2024-07-15 22:31:44.163508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.333 [2024-07-15 22:31:44.163523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.863 spdk_app_start Round 2 00:05:28.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.863 22:31:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.863 22:31:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.863 22:31:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60119 /var/tmp/spdk-nbd.sock 00:05:28.864 22:31:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60119 ']' 00:05:28.864 22:31:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.864 22:31:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.864 22:31:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
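The nbd_common.sh calls traced above are the heart of each app_repeat round: both malloc bdevs are exposed as /dev/nbd0 and /dev/nbd1, filled with random data, byte-compared against the source file, and then detached before the app is killed for the next round. A condensed sketch of the write/verify step, reusing the dd and cmp invocations from the trace (the temp-file path here is illustrative, not the helper's exact path):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # write phase: 256 x 4 KiB (1 MiB) of random data, pushed to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: byte-compare the first 1 MiB of every device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

The round announced here (Round 2) repeats the same cycle against the restarted app.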
00:05:28.864 22:31:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.864 22:31:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.122 22:31:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.122 22:31:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.122 22:31:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.380 Malloc0 00:05:29.380 22:31:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.638 Malloc1 00:05:29.638 22:31:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.638 22:31:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.897 /dev/nbd0 00:05:29.897 22:31:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.897 22:31:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.897 1+0 records in 00:05:29.897 1+0 records out 
00:05:29.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608138 s, 6.7 MB/s 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.897 22:31:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.897 22:31:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.897 22:31:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.897 22:31:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.464 /dev/nbd1 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.464 1+0 records in 00:05:30.464 1+0 records out 00:05:30.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721472 s, 5.7 MB/s 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.464 22:31:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.464 22:31:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.723 { 00:05:30.723 "nbd_device": "/dev/nbd0", 00:05:30.723 "bdev_name": "Malloc0" 00:05:30.723 }, 00:05:30.723 { 00:05:30.723 "nbd_device": "/dev/nbd1", 00:05:30.723 "bdev_name": "Malloc1" 00:05:30.723 } 
00:05:30.723 ]' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.723 { 00:05:30.723 "nbd_device": "/dev/nbd0", 00:05:30.723 "bdev_name": "Malloc0" 00:05:30.723 }, 00:05:30.723 { 00:05:30.723 "nbd_device": "/dev/nbd1", 00:05:30.723 "bdev_name": "Malloc1" 00:05:30.723 } 00:05:30.723 ]' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.723 /dev/nbd1' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.723 /dev/nbd1' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.723 256+0 records in 00:05:30.723 256+0 records out 00:05:30.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656258 s, 160 MB/s 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.723 256+0 records in 00:05:30.723 256+0 records out 00:05:30.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285586 s, 36.7 MB/s 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.723 256+0 records in 00:05:30.723 256+0 records out 00:05:30.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298847 s, 35.1 MB/s 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.723 22:31:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.723 22:31:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.982 22:31:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.240 22:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.240 22:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.240 22:31:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.240 22:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.241 22:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.808 22:31:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.809 22:31:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.809 22:31:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.809 22:31:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.809 22:31:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.067 22:31:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.326 [2024-07-15 22:31:50.044769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.585 [2024-07-15 22:31:50.196066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.585 [2024-07-15 22:31:50.196080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.585 [2024-07-15 22:31:50.269316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.585 [2024-07-15 22:31:50.269498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.585 [2024-07-15 22:31:50.269519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.116 22:31:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60119 /var/tmp/spdk-nbd.sock 00:05:35.116 22:31:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60119 ']' 00:05:35.116 22:31:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.116 22:31:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.116 22:31:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
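The waitfornbd and waitfornbd_exit helpers seen throughout this round poll /proc/partitions until the kernel registers (or removes) the nbd device; on attach the helper additionally reads a single 4 KiB block with O_DIRECT to prove the device answers I/O. A rough sketch of the attach-side wait, assuming the retry limit of 20 from the trace and an arbitrary poll interval:

    wait_for_nbd() {                 # sketch of the waitfornbd pattern from the trace
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                # visible in the partition table; confirm it is actually readable
                dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
                rm -f /tmp/nbdtest
                return 0
            fi
            sleep 0.1                # poll interval is an assumption, not from the trace
        done
        return 1
    }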
00:05:35.116 22:31:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.116 22:31:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.374 22:31:53 event.app_repeat -- event/event.sh@39 -- # killprocess 60119 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60119 ']' 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60119 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60119 00:05:35.374 killing process with pid 60119 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60119' 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60119 00:05:35.374 22:31:53 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60119 00:05:35.632 spdk_app_start is called in Round 0. 00:05:35.632 Shutdown signal received, stop current app iteration 00:05:35.632 Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 reinitialization... 00:05:35.632 spdk_app_start is called in Round 1. 00:05:35.632 Shutdown signal received, stop current app iteration 00:05:35.632 Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 reinitialization... 00:05:35.632 spdk_app_start is called in Round 2. 00:05:35.632 Shutdown signal received, stop current app iteration 00:05:35.632 Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 reinitialization... 00:05:35.632 spdk_app_start is called in Round 3. 
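killprocess, traced above against the app_repeat pid 60119, is the suite's standard teardown: confirm the pid is alive with kill -0, look up its command name with ps so a privileged wrapper is handled specially, then signal it and wait for the shell to reap the job. Reduced to a sketch (the real helper's sudo branch is simplified to a bare guard here):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                    # not running any more
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1        # simplified: never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap the background job
    }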
00:05:35.632 Shutdown signal received, stop current app iteration 00:05:35.632 22:31:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.632 22:31:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.632 00:05:35.632 real 0m19.965s 00:05:35.632 user 0m44.518s 00:05:35.632 sys 0m3.316s 00:05:35.632 22:31:53 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.632 22:31:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.632 ************************************ 00:05:35.632 END TEST app_repeat 00:05:35.632 ************************************ 00:05:35.632 22:31:53 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.632 22:31:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.632 22:31:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.632 22:31:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.632 22:31:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.632 22:31:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.632 ************************************ 00:05:35.632 START TEST cpu_locks 00:05:35.632 ************************************ 00:05:35.632 22:31:53 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.891 * Looking for test storage... 00:05:35.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.891 22:31:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.891 22:31:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.891 22:31:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.891 22:31:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.891 22:31:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.891 22:31:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.891 22:31:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.891 ************************************ 00:05:35.891 START TEST default_locks 00:05:35.891 ************************************ 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60557 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60557 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60557 ']' 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
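The cpu_locks suite that begins here always follows the same launch pattern: start spdk_tgt pinned to core 0 in the background, remember its pid, and block in the waitforlisten helper until the RPC socket accepts connections (/var/tmp/spdk.sock by default, /var/tmp/spdk2.sock for a second instance). In outline, with the binary path taken from this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"        # waits on /var/tmp/spdk.sock by default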
00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.891 22:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.891 [2024-07-15 22:31:53.571959] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:35.891 [2024-07-15 22:31:53.572265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 00:05:35.891 [2024-07-15 22:31:53.707972] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.149 [2024-07-15 22:31:53.868931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.149 [2024-07-15 22:31:53.943269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.103 22:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.103 22:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:37.103 22:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60557 00:05:37.103 22:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60557 00:05:37.103 22:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60557 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60557 ']' 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60557 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60557 00:05:37.374 killing process with pid 60557 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60557' 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60557 00:05:37.374 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60557 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60557 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60557 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
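locks_exist, called right after the target is up, is the central assertion of default_locks: a reactor pinned to core 0 must hold a POSIX file lock whose name contains spdk_cpu_lock, and lslocks on the target's pid is enough to prove it. As a sketch built from the two commands in the trace:

    locks_exist() {
        local pid=$1
        # a claimed core appears in the lock table as an spdk_cpu_lock* entry
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }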
00:05:37.943 ERROR: process (pid: 60557) is no longer running 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60557 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60557 ']' 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60557) - No such process 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.943 ************************************ 00:05:37.943 END TEST default_locks 00:05:37.943 ************************************ 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.943 00:05:37.943 real 0m2.012s 00:05:37.943 user 0m2.123s 00:05:37.943 sys 0m0.667s 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.943 22:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.943 22:31:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.943 22:31:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.943 22:31:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.943 22:31:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.943 22:31:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.943 ************************************ 00:05:37.943 START TEST default_locks_via_rpc 00:05:37.943 ************************************ 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60609 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
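The NOT wrapper evaluated above asserts the opposite of a helper: once the target has been killed, waitforlisten on its old pid has to fail, and NOT turns that failure into a test success. Its effective behaviour, with the argument-validation details of the real helper left out:

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, capture its exit status
        (( es != 0 ))        # succeed only if the command failed
    }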
00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60609 00:05:37.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60609 ']' 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.943 22:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.943 [2024-07-15 22:31:55.642829] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:37.943 [2024-07-15 22:31:55.643197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60609 ] 00:05:38.201 [2024-07-15 22:31:55.783215] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.201 [2024-07-15 22:31:55.917723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.201 [2024-07-15 22:31:55.975653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60609 00:05:39.137 22:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60609 00:05:39.137 22:31:56 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60609 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60609 ']' 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60609 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60609 00:05:39.396 killing process with pid 60609 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60609' 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60609 00:05:39.396 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60609 00:05:39.964 00:05:39.964 real 0m2.058s 00:05:39.964 user 0m2.238s 00:05:39.964 sys 0m0.612s 00:05:39.964 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.964 22:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.964 ************************************ 00:05:39.964 END TEST default_locks_via_rpc 00:05:39.964 ************************************ 00:05:39.964 22:31:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:39.964 22:31:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.964 22:31:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.964 22:31:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.964 22:31:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.964 ************************************ 00:05:39.964 START TEST non_locking_app_on_locked_coremask 00:05:39.964 ************************************ 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60660 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60660 /var/tmp/spdk.sock 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60660 ']' 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.964 22:31:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.964 22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.964 [2024-07-15 22:31:57.748116] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:39.964 [2024-07-15 22:31:57.748488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60660 ] 00:05:40.222 [2024-07-15 22:31:57.885443] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.222 [2024-07-15 22:31:58.038280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.481 [2024-07-15 22:31:58.120303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60676 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60676 /var/tmp/spdk2.sock 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.050 22:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.050 [2024-07-15 22:31:58.852306] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:41.050 [2024-07-15 22:31:58.853031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:05:41.309 [2024-07-15 22:31:58.999104] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
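default_locks_via_rpc, which finished just above, checks the same core lock without killing the target: the framework_disable_cpumask_locks RPC should release the file lock on a running app, and framework_enable_cpumask_locks should reacquire it. The sequence, using the rpc.py client from the trace and assuming $spdk_tgt_pid still holds the target's pid:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks                                        # drop the per-core lock
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: still locked"
    $rpc framework_enable_cpumask_locks                                         # take it back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "unexpected: lock missing"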
00:05:41.309 [2024-07-15 22:31:58.999160] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.568 [2024-07-15 22:31:59.322562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.827 [2024-07-15 22:31:59.492261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.394 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.394 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:42.394 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60660 00:05:42.394 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60660 00:05:42.394 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60660 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60660 ']' 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60660 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60660 00:05:42.962 killing process with pid 60660 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60660' 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60660 00:05:42.962 22:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60660 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60676 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60676 ']' 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60676 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60676 00:05:43.897 killing process with pid 60676 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.897 22:32:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60676' 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60676 00:05:43.897 22:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60676 00:05:44.462 ************************************ 00:05:44.462 END TEST non_locking_app_on_locked_coremask 00:05:44.462 ************************************ 00:05:44.462 00:05:44.462 real 0m4.415s 00:05:44.462 user 0m4.740s 00:05:44.462 sys 0m1.334s 00:05:44.462 22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.462 22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.462 22:32:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.462 22:32:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.462 22:32:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.462 22:32:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.462 22:32:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.462 ************************************ 00:05:44.462 START TEST locking_app_on_unlocked_coremask 00:05:44.462 ************************************ 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60754 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60754 /var/tmp/spdk.sock 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60754 ']' 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.462 22:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.462 [2024-07-15 22:32:02.225976] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
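The test that just ended (non_locking_app_on_locked_coremask) shows the escape hatch: a second target may share an already-claimed core as long as it is started with --disable-cpumask-locks and its own RPC socket. In outline, with the paths from this run:

    # first target claims core 0 and its lock file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    waitforlisten "$pid1"
    # second target reuses core 0 but opts out of the lock, so it starts cleanly
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

locking_app_on_unlocked_coremask, which starts here, flips the roles: the first instance is the one launched with --disable-cpumask-locks, and the second, with locking enabled, then claims core 0 itself.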
00:05:44.462 [2024-07-15 22:32:02.226364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60754 ] 00:05:44.719 [2024-07-15 22:32:02.364372] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.719 [2024-07-15 22:32:02.364753] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.719 [2024-07-15 22:32:02.488478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.719 [2024-07-15 22:32:02.547318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60770 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60770 /var/tmp/spdk2.sock 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60770 ']' 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.652 22:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.652 [2024-07-15 22:32:03.366378] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:45.652 [2024-07-15 22:32:03.366760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60770 ] 00:05:45.908 [2024-07-15 22:32:03.511805] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.197 [2024-07-15 22:32:03.856624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.197 [2024-07-15 22:32:04.027339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:46.762 22:32:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.762 22:32:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.762 22:32:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60770 00:05:46.762 22:32:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60770 00:05:46.762 22:32:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60754 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60754 ']' 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60754 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60754 00:05:47.695 killing process with pid 60754 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60754' 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60754 00:05:47.695 22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60754 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60770 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60770 ']' 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60770 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60770 00:05:48.645 killing process with pid 60770 00:05:48.645 22:32:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60770' 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60770 00:05:48.645 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60770 00:05:49.216 ************************************ 00:05:49.216 END TEST locking_app_on_unlocked_coremask 00:05:49.216 ************************************ 00:05:49.216 00:05:49.216 real 0m4.584s 00:05:49.216 user 0m5.024s 00:05:49.216 sys 0m1.303s 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.216 22:32:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.216 22:32:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.216 22:32:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.216 22:32:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.216 22:32:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.216 ************************************ 00:05:49.216 START TEST locking_app_on_locked_coremask 00:05:49.216 ************************************ 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60837 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60837 /var/tmp/spdk.sock 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60837 ']' 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.216 22:32:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.216 [2024-07-15 22:32:06.872278] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:49.216 [2024-07-15 22:32:06.872399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60837 ] 00:05:49.216 [2024-07-15 22:32:07.014758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.475 [2024-07-15 22:32:07.160366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.475 [2024-07-15 22:32:07.224010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60853 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60853 /var/tmp/spdk2.sock 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60853 /var/tmp/spdk2.sock 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60853 /var/tmp/spdk2.sock 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60853 ']' 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.042 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.300 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.300 22:32:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.300 [2024-07-15 22:32:07.937400] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
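[editor's note] The second target launched here asks for the same core mask (-m 0x1) already held by pid 60837, so the test wraps waitforlisten in NOT and expects it to fail rather than come up listening. A rough, simplified sketch of that wrapper (the real one in autotest_common.sh also validates its argument before running it):

    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT waitforlisten 60853 /var/tmp/spdk2.sock   # succeeds only if the second target never starts listening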
00:05:50.300 [2024-07-15 22:32:07.937733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60853 ] 00:05:50.301 [2024-07-15 22:32:08.083016] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60837 has claimed it. 00:05:50.301 [2024-07-15 22:32:08.083110] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.866 ERROR: process (pid: 60853) is no longer running 00:05:50.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60853) - No such process 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60837 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60837 00:05:50.866 22:32:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60837 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60837 ']' 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60837 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60837 00:05:51.451 killing process with pid 60837 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60837' 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60837 00:05:51.451 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60837 00:05:52.020 00:05:52.020 real 0m2.750s 00:05:52.020 user 0m3.155s 00:05:52.020 sys 0m0.698s 00:05:52.020 22:32:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.020 22:32:09 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:52.020 ************************************ 00:05:52.020 END TEST locking_app_on_locked_coremask 00:05:52.020 ************************************ 00:05:52.020 22:32:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.020 22:32:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.020 22:32:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.020 22:32:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.020 22:32:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.020 ************************************ 00:05:52.020 START TEST locking_overlapped_coremask 00:05:52.020 ************************************ 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60904 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60904 /var/tmp/spdk.sock 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60904 ']' 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.020 22:32:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.020 [2024-07-15 22:32:09.679484] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:52.020 [2024-07-15 22:32:09.679580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60904 ] 00:05:52.020 [2024-07-15 22:32:09.825421] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.279 [2024-07-15 22:32:09.992612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.279 [2024-07-15 22:32:09.992772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.279 [2024-07-15 22:32:09.992811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.279 [2024-07-15 22:32:10.082442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60922 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60922 /var/tmp/spdk2.sock 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60922 /var/tmp/spdk2.sock 00:05:53.214 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60922 /var/tmp/spdk2.sock 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60922 ']' 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.215 22:32:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.215 [2024-07-15 22:32:10.743572] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
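[editor's note] The first target above holds -m 0x7 (cores 0-2) and the second requests -m 0x1c (cores 2-4); the masks intersect on core 2, which is why the claim in the lines that follow is expected to fail. A quick check of the overlap:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2, so core 2 is contested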
00:05:53.215 [2024-07-15 22:32:10.743705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:05:53.215 [2024-07-15 22:32:10.885971] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60904 has claimed it. 00:05:53.215 [2024-07-15 22:32:10.886089] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.782 ERROR: process (pid: 60922) is no longer running 00:05:53.782 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60922) - No such process 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60904 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60904 ']' 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60904 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60904 00:05:53.782 killing process with pid 60904 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60904' 00:05:53.782 22:32:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60904 00:05:53.782 22:32:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60904 00:05:54.348 00:05:54.348 real 0m2.483s 00:05:54.348 user 0m6.656s 00:05:54.348 sys 0m0.559s 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.348 ************************************ 00:05:54.348 END TEST locking_overlapped_coremask 00:05:54.348 ************************************ 00:05:54.348 22:32:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.348 22:32:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.348 22:32:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.348 22:32:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.348 22:32:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.348 ************************************ 00:05:54.348 START TEST locking_overlapped_coremask_via_rpc 00:05:54.348 ************************************ 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60962 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60962 /var/tmp/spdk.sock 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60962 ']' 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.348 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.349 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.349 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.349 22:32:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.607 [2024-07-15 22:32:12.217980] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:54.607 [2024-07-15 22:32:12.218106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60962 ] 00:05:54.607 [2024-07-15 22:32:12.356604] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.607 [2024-07-15 22:32:12.356677] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.866 [2024-07-15 22:32:12.505570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.866 [2024-07-15 22:32:12.505703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.866 [2024-07-15 22:32:12.505712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.866 [2024-07-15 22:32:12.579098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60980 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60980 /var/tmp/spdk2.sock 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60980 ']' 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.433 22:32:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.433 [2024-07-15 22:32:13.221579] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:55.433 [2024-07-15 22:32:13.221677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60980 ] 00:05:55.691 [2024-07-15 22:32:13.365674] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
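[editor's note] Unlike the previous test, both targets here are started with --disable-cpumask-locks, so neither takes the /var/tmp/spdk_cpu_lock_* files at startup ("CPU core locks deactivated") and the overlapping masks 0x7 and 0x1c can coexist; the locks are only claimed later via RPC. The two launches reduce to roughly:

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &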
00:05:55.691 [2024-07-15 22:32:13.365764] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.948 [2024-07-15 22:32:13.622635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.948 [2024-07-15 22:32:13.622803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.948 [2024-07-15 22:32:13.622806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.948 [2024-07-15 22:32:13.734842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.514 [2024-07-15 22:32:14.226063] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60962 has claimed it. 
00:05:56.514 request: 00:05:56.514 { 00:05:56.514 "method": "framework_enable_cpumask_locks", 00:05:56.514 "req_id": 1 00:05:56.514 } 00:05:56.514 Got JSON-RPC error response 00:05:56.514 response: 00:05:56.514 { 00:05:56.514 "code": -32603, 00:05:56.514 "message": "Failed to claim CPU core: 2" 00:05:56.514 } 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60962 /var/tmp/spdk.sock 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60962 ']' 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.514 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60980 /var/tmp/spdk2.sock 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60980 ']' 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
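[editor's note] The JSON-RPC request/response above is what rpc_cmd sends over the UNIX sockets. A sketch of issuing the same calls by hand, assuming rpc_cmd wraps the repo's scripts/rpc.py as usual:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails, core 2 already claimed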
00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.773 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.045 00:05:57.045 real 0m2.568s 00:05:57.045 user 0m1.304s 00:05:57.045 sys 0m0.184s 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.045 ************************************ 00:05:57.045 END TEST locking_overlapped_coremask_via_rpc 00:05:57.045 ************************************ 00:05:57.045 22:32:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:57.046 22:32:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.046 22:32:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60962 ]] 00:05:57.046 22:32:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60962 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60962 ']' 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60962 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60962 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.046 killing process with pid 60962 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60962' 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60962 00:05:57.046 22:32:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60962 00:05:57.625 22:32:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60980 ]] 00:05:57.625 22:32:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60980 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60980 ']' 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60980 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:57.625 22:32:15 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60980 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:57.625 killing process with pid 60980 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60980' 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60980 00:05:57.625 22:32:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60980 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60962 ]] 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60962 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60962 ']' 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60962 00:05:58.191 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60962) - No such process 00:05:58.191 Process with pid 60962 is not found 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60962 is not found' 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60980 ]] 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60980 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60980 ']' 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60980 00:05:58.191 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60980) - No such process 00:05:58.191 Process with pid 60980 is not found 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60980 is not found' 00:05:58.191 22:32:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.191 00:05:58.191 real 0m22.500s 00:05:58.191 user 0m38.215s 00:05:58.191 sys 0m6.295s 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.191 ************************************ 00:05:58.191 END TEST cpu_locks 00:05:58.191 22:32:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.191 ************************************ 00:05:58.191 22:32:15 event -- common/autotest_common.sh@1142 -- # return 0 00:05:58.191 00:05:58.191 real 0m51.732s 00:05:58.191 user 1m38.258s 00:05:58.191 sys 0m10.515s 00:05:58.191 22:32:15 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.191 ************************************ 00:05:58.191 END TEST event 00:05:58.191 22:32:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.191 ************************************ 00:05:58.191 22:32:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.191 22:32:16 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.191 22:32:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.191 22:32:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.191 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.191 ************************************ 00:05:58.191 START TEST thread 
00:05:58.191 ************************************ 00:05:58.191 22:32:16 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.450 * Looking for test storage... 00:05:58.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.450 22:32:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.450 22:32:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:58.450 22:32:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.450 22:32:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.450 ************************************ 00:05:58.450 START TEST thread_poller_perf 00:05:58.450 ************************************ 00:05:58.450 22:32:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.450 [2024-07-15 22:32:16.129231] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:58.450 [2024-07-15 22:32:16.129338] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61108 ] 00:05:58.450 [2024-07-15 22:32:16.268803] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.708 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:58.708 [2024-07-15 22:32:16.411405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.085 ====================================== 00:06:00.085 busy:2210317955 (cyc) 00:06:00.085 total_run_count: 310000 00:06:00.085 tsc_hz: 2200000000 (cyc) 00:06:00.085 ====================================== 00:06:00.085 poller_cost: 7130 (cyc), 3240 (nsec) 00:06:00.085 00:06:00.085 real 0m1.428s 00:06:00.085 user 0m1.245s 00:06:00.085 sys 0m0.075s 00:06:00.085 22:32:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.085 22:32:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.085 ************************************ 00:06:00.085 END TEST thread_poller_perf 00:06:00.085 ************************************ 00:06:00.085 22:32:17 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:00.085 22:32:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.085 22:32:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:00.085 22:32:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.085 22:32:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.085 ************************************ 00:06:00.085 START TEST thread_poller_perf 00:06:00.085 ************************************ 00:06:00.085 22:32:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.085 [2024-07-15 22:32:17.607775] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
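[editor's note] The first run's poller_cost (7130 cyc, 3240 nsec) follows directly from the counters printed above: busy cycles divided by total_run_count, converted to nanoseconds with tsc_hz (2.2 GHz in this run):

    echo $(( 2210317955 / 310000 ))             # -> 7130 cycles per poll
    echo $(( 7130 * 1000000000 / 2200000000 ))  # -> 3240 ns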
00:06:00.085 [2024-07-15 22:32:17.607918] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:06:00.085 [2024-07-15 22:32:17.740703] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.085 [2024-07-15 22:32:17.893617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.085 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:01.459 ====================================== 00:06:01.459 busy:2202123349 (cyc) 00:06:01.459 total_run_count: 4088000 00:06:01.459 tsc_hz: 2200000000 (cyc) 00:06:01.459 ====================================== 00:06:01.459 poller_cost: 538 (cyc), 244 (nsec) 00:06:01.459 00:06:01.460 real 0m1.425s 00:06:01.460 user 0m1.250s 00:06:01.460 sys 0m0.067s 00:06:01.460 22:32:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.460 22:32:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.460 ************************************ 00:06:01.460 END TEST thread_poller_perf 00:06:01.460 ************************************ 00:06:01.460 22:32:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:01.460 22:32:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.460 ************************************ 00:06:01.460 END TEST thread 00:06:01.460 ************************************ 00:06:01.460 00:06:01.460 real 0m3.053s 00:06:01.460 user 0m2.566s 00:06:01.460 sys 0m0.267s 00:06:01.460 22:32:19 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.460 22:32:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.460 22:32:19 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.460 22:32:19 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:01.460 22:32:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.460 22:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.460 22:32:19 -- common/autotest_common.sh@10 -- # set +x 00:06:01.460 ************************************ 00:06:01.460 START TEST accel 00:06:01.460 ************************************ 00:06:01.460 22:32:19 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:01.460 * Looking for test storage... 00:06:01.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:01.460 22:32:19 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:01.460 22:32:19 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:01.460 22:32:19 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.460 22:32:19 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61218 00:06:01.460 22:32:19 accel -- accel/accel.sh@63 -- # waitforlisten 61218 00:06:01.460 22:32:19 accel -- common/autotest_common.sh@829 -- # '[' -z 61218 ']' 00:06:01.460 22:32:19 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.460 22:32:19 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.460 22:32:19 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:01.460 22:32:19 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.460 22:32:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.460 22:32:19 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:01.460 22:32:19 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:01.460 22:32:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.460 22:32:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.460 22:32:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.460 22:32:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.460 22:32:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.460 22:32:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:01.460 22:32:19 accel -- accel/accel.sh@41 -- # jq -r . 00:06:01.460 [2024-07-15 22:32:19.274952] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:01.460 [2024-07-15 22:32:19.275072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61218 ] 00:06:01.717 [2024-07-15 22:32:19.416178] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.975 [2024-07-15 22:32:19.566859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.975 [2024-07-15 22:32:19.642128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.542 22:32:20 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.542 22:32:20 accel -- common/autotest_common.sh@862 -- # return 0 00:06:02.542 22:32:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:02.542 22:32:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:02.542 22:32:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:02.542 22:32:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:02.542 22:32:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:02.542 22:32:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:02.542 22:32:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.542 22:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.542 22:32:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:02.542 22:32:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.542 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.542 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.542 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.543 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.543 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.543 22:32:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.543 22:32:20 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.543 22:32:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.543 22:32:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.543 22:32:20 accel -- accel/accel.sh@75 -- # killprocess 61218 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@948 -- # '[' -z 61218 ']' 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@952 -- # kill -0 61218 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@953 -- # uname 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61218 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.543 killing process with pid 61218 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61218' 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@967 -- # kill 61218 00:06:02.543 22:32:20 accel -- common/autotest_common.sh@972 -- # wait 61218 00:06:03.144 22:32:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:03.144 22:32:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.144 22:32:20 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:03.144 22:32:20 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:03.144 22:32:20 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.144 22:32:20 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.144 22:32:20 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.144 22:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.144 ************************************ 00:06:03.144 START TEST accel_missing_filename 00:06:03.144 ************************************ 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.144 22:32:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:03.144 22:32:20 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:03.402 [2024-07-15 22:32:20.981345] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:03.402 [2024-07-15 22:32:20.981449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61272 ] 00:06:03.402 [2024-07-15 22:32:21.122038] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.660 [2024-07-15 22:32:21.273353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.660 [2024-07-15 22:32:21.348648] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.660 [2024-07-15 22:32:21.460796] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:03.918 A filename is required. 
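[editor's note] "A filename is required." is the expected outcome here: the compress workload has no default input, so accel_perf refuses to start without -l. Supplying the repo's test file, as the next test does, makes it runnable, e.g.:

    build/examples/accel_perf -t 1 -w compress -l test/accel/bib
    # adding -y is rejected for compress ("Compression does not support the verify option"),
    # which is exactly what the following accel_compress_verify test checks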
00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.918 00:06:03.918 real 0m0.634s 00:06:03.918 user 0m0.417s 00:06:03.918 sys 0m0.148s 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.918 22:32:21 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:03.918 ************************************ 00:06:03.918 END TEST accel_missing_filename 00:06:03.918 ************************************ 00:06:03.918 22:32:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.918 22:32:21 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:03.918 22:32:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:03.918 22:32:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.918 22:32:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.918 ************************************ 00:06:03.918 START TEST accel_compress_verify 00:06:03.918 ************************************ 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.918 22:32:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.918 22:32:21 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:03.918 22:32:21 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:03.918 [2024-07-15 22:32:21.668909] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:03.918 [2024-07-15 22:32:21.669027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61300 ] 00:06:04.176 [2024-07-15 22:32:21.806394] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.176 [2024-07-15 22:32:21.957263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.434 [2024-07-15 22:32:22.033945] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.434 [2024-07-15 22:32:22.147013] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:04.434 00:06:04.434 Compression does not support the verify option, aborting. 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.692 00:06:04.692 real 0m0.629s 00:06:04.692 user 0m0.431s 00:06:04.692 sys 0m0.144s 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.692 22:32:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:04.692 ************************************ 00:06:04.692 END TEST accel_compress_verify 00:06:04.692 ************************************ 00:06:04.692 22:32:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.692 22:32:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:04.692 22:32:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:04.692 22:32:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.692 22:32:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.692 ************************************ 00:06:04.692 START TEST accel_wrong_workload 00:06:04.692 ************************************ 00:06:04.692 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:04.692 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:04.692 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:04.692 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:04.692 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.692 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:04.693 22:32:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:04.693 Unsupported workload type: foobar 00:06:04.693 [2024-07-15 22:32:22.348047] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:04.693 accel_perf options: 00:06:04.693 [-h help message] 00:06:04.693 [-q queue depth per core] 00:06:04.693 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:04.693 [-T number of threads per core 00:06:04.693 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:04.693 [-t time in seconds] 00:06:04.693 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:04.693 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:04.693 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:04.693 [-l for compress/decompress workloads, name of uncompressed input file 00:06:04.693 [-S for crc32c workload, use this seed value (default 0) 00:06:04.693 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:04.693 [-f for fill workload, use this BYTE value (default 255) 00:06:04.693 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:04.693 [-y verify result if this switch is on] 00:06:04.693 [-a tasks to allocate per core (default: same value as -q)] 00:06:04.693 Can be used to spread operations across a wider range of memory. 
00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.693 00:06:04.693 real 0m0.029s 00:06:04.693 user 0m0.018s 00:06:04.693 sys 0m0.011s 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.693 22:32:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:04.693 ************************************ 00:06:04.693 END TEST accel_wrong_workload 00:06:04.693 ************************************ 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.693 22:32:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.693 ************************************ 00:06:04.693 START TEST accel_negative_buffers 00:06:04.693 ************************************ 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:04.693 22:32:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:04.693 -x option must be non-negative. 
00:06:04.693 [2024-07-15 22:32:22.431781] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:04.693 accel_perf options: 00:06:04.693 [-h help message] 00:06:04.693 [-q queue depth per core] 00:06:04.693 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:04.693 [-T number of threads per core 00:06:04.693 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:04.693 [-t time in seconds] 00:06:04.693 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:04.693 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:04.693 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:04.693 [-l for compress/decompress workloads, name of uncompressed input file 00:06:04.693 [-S for crc32c workload, use this seed value (default 0) 00:06:04.693 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:04.693 [-f for fill workload, use this BYTE value (default 255) 00:06:04.693 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:04.693 [-y verify result if this switch is on] 00:06:04.693 [-a tasks to allocate per core (default: same value as -q)] 00:06:04.693 Can be used to spread operations across a wider range of memory. 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.693 00:06:04.693 real 0m0.036s 00:06:04.693 user 0m0.015s 00:06:04.693 sys 0m0.020s 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.693 22:32:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:04.693 ************************************ 00:06:04.693 END TEST accel_negative_buffers 00:06:04.693 ************************************ 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.693 22:32:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.693 22:32:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.693 ************************************ 00:06:04.693 START TEST accel_crc32c 00:06:04.693 ************************************ 00:06:04.693 22:32:22 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:04.693 22:32:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:04.693 [2024-07-15 22:32:22.515785] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:04.693 [2024-07-15 22:32:22.515893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61365 ] 00:06:04.952 [2024-07-15 22:32:22.651100] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.210 [2024-07-15 22:32:22.809601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.210 22:32:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.211 22:32:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.211 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.211 22:32:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:06.588 22:32:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.588 00:06:06.588 real 0m1.646s 00:06:06.588 user 0m1.406s 00:06:06.588 sys 0m0.145s 00:06:06.588 22:32:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.588 ************************************ 00:06:06.588 END TEST accel_crc32c 00:06:06.588 22:32:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:06.588 ************************************ 00:06:06.588 22:32:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.588 22:32:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:06.588 22:32:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:06.588 22:32:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.588 22:32:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.588 ************************************ 00:06:06.588 START TEST accel_crc32c_C2 00:06:06.588 ************************************ 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:06.588 22:32:24 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.588 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:06.588 [2024-07-15 22:32:24.210014] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:06.588 [2024-07-15 22:32:24.210901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:06:06.588 [2024-07-15 22:32:24.352103] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.847 [2024-07-15 22:32:24.503770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:06.847 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.848 22:32:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.225 22:32:25 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.225 00:06:08.225 real 0m1.647s 00:06:08.225 user 0m1.398s 00:06:08.225 sys 0m0.155s 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.225 ************************************ 00:06:08.225 END TEST accel_crc32c_C2 00:06:08.225 ************************************ 00:06:08.225 22:32:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:08.225 22:32:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.225 22:32:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:08.225 22:32:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.225 22:32:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.225 22:32:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.225 ************************************ 00:06:08.225 START TEST accel_copy 00:06:08.225 ************************************ 00:06:08.225 22:32:25 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.225 22:32:25 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:08.225 22:32:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:08.225 [2024-07-15 22:32:25.904446] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:08.225 [2024-07-15 22:32:25.904537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61434 ] 00:06:08.225 [2024-07-15 22:32:26.039597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.528 [2024-07-15 22:32:26.197364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 
22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.528 22:32:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:09.928 ************************************ 00:06:09.928 END TEST accel_copy 00:06:09.928 ************************************ 00:06:09.928 22:32:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.928 00:06:09.928 real 0m1.632s 00:06:09.928 user 0m1.391s 00:06:09.928 sys 0m0.147s 00:06:09.928 22:32:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.928 22:32:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.928 22:32:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.928 22:32:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.928 22:32:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:09.928 22:32:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.928 22:32:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.928 ************************************ 00:06:09.928 START TEST accel_fill 00:06:09.928 ************************************ 00:06:09.928 22:32:27 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.928 22:32:27 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:09.928 22:32:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:09.928 [2024-07-15 22:32:27.583337] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:09.928 [2024-07-15 22:32:27.583424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61474 ] 00:06:09.928 [2024-07-15 22:32:27.719748] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.187 [2024-07-15 22:32:27.868563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.187 22:32:27 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.187 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.188 22:32:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:11.565 22:32:29 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.565 00:06:11.565 real 0m1.617s 00:06:11.565 user 0m0.017s 00:06:11.565 sys 0m0.000s 00:06:11.565 22:32:29 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.565 22:32:29 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:11.565 ************************************ 00:06:11.565 END TEST accel_fill 00:06:11.565 ************************************ 00:06:11.565 22:32:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.565 22:32:29 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:11.565 22:32:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.565 22:32:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.565 22:32:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.565 ************************************ 00:06:11.565 START TEST accel_copy_crc32c 00:06:11.565 ************************************ 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:11.565 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:11.566 [2024-07-15 22:32:29.251416] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:11.566 [2024-07-15 22:32:29.251503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:06:11.566 [2024-07-15 22:32:29.382857] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.824 [2024-07-15 22:32:29.531083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.824 22:32:29 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 22:32:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
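For reference, the copy_crc32c case above is driven by the accel_perf example binary through the command line echoed at accel/accel.sh@12. A minimal manual re-run of the same workload, assuming the checkout path used by this job and with the flag meanings inferred from the logged usage (-t run time in seconds, -w workload name, -y verify the result), would look roughly like:
  # Hypothetical standalone reproduction of the logged accel_perf invocation.
  # The -c /dev/fd/62 config descriptor is supplied by the test harness and the
  # accel JSON config appears empty in this run, so it is omitted here.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y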
00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.200 00:06:13.200 real 0m1.614s 00:06:13.200 user 0m1.377s 00:06:13.200 sys 0m0.143s 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.200 22:32:30 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:13.200 ************************************ 00:06:13.200 END TEST accel_copy_crc32c 00:06:13.200 ************************************ 00:06:13.200 22:32:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.200 22:32:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.200 22:32:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:13.200 22:32:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.200 22:32:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.200 ************************************ 00:06:13.200 START TEST accel_copy_crc32c_C2 00:06:13.200 ************************************ 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.200 22:32:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:13.200 [2024-07-15 22:32:30.912590] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:13.200 [2024-07-15 22:32:30.912683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61543 ] 00:06:13.459 [2024-07-15 22:32:31.044440] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.459 [2024-07-15 22:32:31.191218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.459 22:32:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.832 00:06:14.832 real 0m1.611s 00:06:14.832 user 0m1.371s 00:06:14.832 sys 0m0.147s 00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:14.832 22:32:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:14.832 ************************************ 00:06:14.832 END TEST accel_copy_crc32c_C2 00:06:14.832 ************************************ 00:06:14.832 22:32:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.832 22:32:32 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.832 22:32:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.832 22:32:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.833 22:32:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.833 ************************************ 00:06:14.833 START TEST accel_dualcast 00:06:14.833 ************************************ 00:06:14.833 22:32:32 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:14.833 22:32:32 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:14.833 [2024-07-15 22:32:32.578058] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:14.833 [2024-07-15 22:32:32.578172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61572 ] 00:06:15.091 [2024-07-15 22:32:32.716764] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.091 [2024-07-15 22:32:32.861811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.348 22:32:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:16.719 22:32:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.719 00:06:16.719 real 0m1.621s 00:06:16.719 user 0m1.378s 00:06:16.719 sys 0m0.147s 00:06:16.719 22:32:34 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.720 22:32:34 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:16.720 ************************************ 00:06:16.720 END TEST accel_dualcast 00:06:16.720 ************************************ 00:06:16.720 22:32:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.720 22:32:34 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:16.720 22:32:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.720 22:32:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.720 22:32:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.720 ************************************ 00:06:16.720 START TEST accel_compare 00:06:16.720 ************************************ 00:06:16.720 22:32:34 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:16.720 22:32:34 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:16.720 [2024-07-15 22:32:34.247143] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:16.720 [2024-07-15 22:32:34.247313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61612 ] 00:06:16.720 [2024-07-15 22:32:34.391177] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.978 [2024-07-15 22:32:34.554205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.978 22:32:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 
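Each case in this block follows the same wrapper chain, all of it visible in the trace: run_test names the suite entry, accel_test forwards its arguments to the accel_perf shell helper (accel/accel.sh@15), and that helper launches the example binary (accel/accel.sh@12) for the one-second measured run. Schematically, for the compare case running here, with names exactly as they appear above:
  # Schematic restatement of the harness calls recorded in this log.
  run_test accel_compare accel_test -t 1 -w compare -y
  #   -> accel_perf -t 1 -w compare -y
  #   -> /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y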
00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:18.367 22:32:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.367 00:06:18.367 real 0m1.656s 00:06:18.367 user 0m1.402s 00:06:18.367 sys 0m0.154s 00:06:18.367 22:32:35 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.367 ************************************ 00:06:18.367 END TEST accel_compare 00:06:18.367 ************************************ 00:06:18.367 22:32:35 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:18.367 22:32:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.367 22:32:35 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:18.367 22:32:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.367 22:32:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.367 22:32:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.367 ************************************ 00:06:18.367 START TEST accel_xor 00:06:18.367 ************************************ 00:06:18.367 22:32:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:18.367 22:32:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:18.367 [2024-07-15 22:32:35.952670] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:18.367 [2024-07-15 22:32:35.952777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61652 ] 00:06:18.367 [2024-07-15 22:32:36.093155] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.654 [2024-07-15 22:32:36.269925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.654 22:32:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.026 22:32:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.027 22:32:37 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.027 00:06:20.027 real 0m1.670s 00:06:20.027 user 0m1.420s 00:06:20.027 sys 0m0.152s 00:06:20.027 22:32:37 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.027 22:32:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:20.027 ************************************ 00:06:20.027 END TEST accel_xor 00:06:20.027 ************************************ 00:06:20.027 22:32:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.027 22:32:37 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:20.027 22:32:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:20.027 22:32:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.027 22:32:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.027 ************************************ 00:06:20.027 START TEST accel_xor 00:06:20.027 ************************************ 00:06:20.027 22:32:37 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:20.027 22:32:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:20.027 [2024-07-15 22:32:37.673261] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:20.027 [2024-07-15 22:32:37.673409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:06:20.027 [2024-07-15 22:32:37.818103] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.284 [2024-07-15 22:32:37.970500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
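The remaining cases reuse the same command skeleton and vary only the buffer fan-out: -C 2 on the earlier accel_copy_crc32c_C2 run and -x 3 on the second xor run now in progress. Both values are taken from the commands echoed above; reading them as source-buffer counts is inferred rather than stated in the log:
  # The two variant invocations recorded in this block, side by side.
  accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2   # -C 2 inferred as the chained source-buffer count
  accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3           # -x 3 inferred as the number of xor source buffers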
00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.284 22:32:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.663 22:32:39 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:21.663 22:32:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.663 00:06:21.663 real 0m1.637s 00:06:21.663 user 0m1.391s 00:06:21.663 sys 0m0.151s 00:06:21.663 22:32:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.663 22:32:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:21.663 ************************************ 00:06:21.663 END TEST accel_xor 00:06:21.663 ************************************ 00:06:21.663 22:32:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.663 22:32:39 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:21.663 22:32:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:21.663 22:32:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.663 22:32:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.663 ************************************ 00:06:21.663 START TEST accel_dif_verify 00:06:21.663 ************************************ 00:06:21.663 22:32:39 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:21.663 22:32:39 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:21.663 [2024-07-15 22:32:39.360301] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:21.663 [2024-07-15 22:32:39.360414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61721 ] 00:06:21.663 [2024-07-15 22:32:39.492970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.921 [2024-07-15 22:32:39.643177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.921 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:21.922 22:32:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.328 22:32:40 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.328 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:23.329 22:32:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.329 00:06:23.329 real 0m1.627s 00:06:23.329 user 0m1.384s 00:06:23.329 sys 0m0.152s 00:06:23.329 22:32:40 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.329 22:32:40 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 ************************************ 00:06:23.329 END TEST accel_dif_verify 00:06:23.329 ************************************ 00:06:23.329 22:32:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.329 22:32:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:23.329 22:32:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:23.329 22:32:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.329 22:32:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 ************************************ 00:06:23.329 START TEST accel_dif_generate 00:06:23.329 ************************************ 00:06:23.329 22:32:41 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.329 22:32:41 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:23.329 22:32:41 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:23.329 [2024-07-15 22:32:41.036775] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:23.329 [2024-07-15 22:32:41.036880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61758 ] 00:06:23.587 [2024-07-15 22:32:41.174287] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.587 [2024-07-15 22:32:41.335952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.587 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.845 22:32:41 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.845 22:32:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:25.218 22:32:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.218 00:06:25.218 real 0m1.636s 
00:06:25.218 user 0m1.396s 00:06:25.218 sys 0m0.146s 00:06:25.218 22:32:42 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.218 ************************************ 00:06:25.218 END TEST accel_dif_generate 00:06:25.218 ************************************ 00:06:25.219 22:32:42 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:25.219 22:32:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.219 22:32:42 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:25.219 22:32:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.219 22:32:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.219 22:32:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.219 ************************************ 00:06:25.219 START TEST accel_dif_generate_copy 00:06:25.219 ************************************ 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:25.219 22:32:42 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.219 [2024-07-15 22:32:42.731082] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:25.219 [2024-07-15 22:32:42.731180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61792 ] 00:06:25.219 [2024-07-15 22:32:42.870509] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.219 [2024-07-15 22:32:42.994535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.477 22:32:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.411 00:06:26.411 real 0m1.523s 00:06:26.411 user 0m1.321s 00:06:26.411 sys 0m0.108s 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.411 ************************************ 00:06:26.411 END TEST accel_dif_generate_copy 00:06:26.411 ************************************ 00:06:26.411 22:32:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.670 22:32:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.670 22:32:44 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:26.670 22:32:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.670 22:32:44 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.670 22:32:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.670 22:32:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.670 ************************************ 00:06:26.670 START TEST accel_comp 00:06:26.670 ************************************ 00:06:26.670 22:32:44 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:26.670 22:32:44 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:26.670 22:32:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:26.670 [2024-07-15 22:32:44.298180] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:26.670 [2024-07-15 22:32:44.298309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61833 ] 00:06:26.670 [2024-07-15 22:32:44.436129] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.928 [2024-07-15 22:32:44.554961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:32:44 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:32:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:28.301 22:32:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.301 00:06:28.301 real 0m1.511s 00:06:28.301 user 0m1.309s 00:06:28.301 sys 0m0.103s 00:06:28.301 22:32:45 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.301 22:32:45 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:28.301 ************************************ 00:06:28.301 END TEST accel_comp 00:06:28.301 ************************************ 00:06:28.301 22:32:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.301 22:32:45 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.301 22:32:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:28.301 22:32:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.301 22:32:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.301 ************************************ 00:06:28.301 START TEST accel_decomp 00:06:28.301 ************************************ 00:06:28.301 22:32:45 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:28.301 22:32:45 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:28.301 [2024-07-15 22:32:45.864188] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:28.301 [2024-07-15 22:32:45.864308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61862 ] 00:06:28.301 [2024-07-15 22:32:46.003519] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.301 [2024-07-15 22:32:46.122966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.560 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.561 22:32:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.935 22:32:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.935 00:06:29.935 real 0m1.539s 00:06:29.935 user 0m1.327s 00:06:29.935 sys 0m0.120s 00:06:29.935 22:32:47 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.935 22:32:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:29.935 ************************************ 00:06:29.935 END TEST accel_decomp 00:06:29.935 ************************************ 00:06:29.935 22:32:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.935 22:32:47 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.935 22:32:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:29.935 22:32:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.935 22:32:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.935 ************************************ 00:06:29.935 START TEST accel_decomp_full 00:06:29.935 ************************************ 00:06:29.935 22:32:47 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:29.935 [2024-07-15 22:32:47.441112] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:29.935 [2024-07-15 22:32:47.441193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61902 ] 00:06:29.935 [2024-07-15 22:32:47.576464] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.935 [2024-07-15 22:32:47.695097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.935 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.936 22:32:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.339 22:32:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.339 00:06:31.339 real 0m1.505s 00:06:31.339 user 0m0.016s 00:06:31.339 sys 0m0.002s 00:06:31.339 ************************************ 00:06:31.339 END TEST accel_decomp_full 00:06:31.339 ************************************ 00:06:31.339 22:32:48 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.339 22:32:48 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 22:32:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.339 22:32:48 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:31.339 22:32:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:31.339 22:32:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.339 22:32:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 ************************************ 00:06:31.339 START TEST accel_decomp_mcore 00:06:31.339 ************************************ 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:31.339 22:32:48 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:31.339 [2024-07-15 22:32:48.999145] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:31.339 [2024-07-15 22:32:48.999242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61937 ] 00:06:31.339 [2024-07-15 22:32:49.136675] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.599 [2024-07-15 22:32:49.260313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.599 [2024-07-15 22:32:49.260453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.599 [2024-07-15 22:32:49.260620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.599 [2024-07-15 22:32:49.260700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.599 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.600 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.600 22:32:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.976 00:06:32.976 real 0m1.528s 00:06:32.976 user 0m4.692s 00:06:32.976 sys 0m0.135s 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.976 22:32:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:32.976 ************************************ 00:06:32.976 END TEST accel_decomp_mcore 00:06:32.976 ************************************ 00:06:32.977 22:32:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.977 22:32:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.977 22:32:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:32.977 22:32:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.977 22:32:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.977 ************************************ 00:06:32.977 START TEST accel_decomp_full_mcore 00:06:32.977 ************************************ 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.977 22:32:50 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:32.977 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:32.977 [2024-07-15 22:32:50.571641] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:32.977 [2024-07-15 22:32:50.571730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61972 ] 00:06:32.977 [2024-07-15 22:32:50.705722] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.235 [2024-07-15 22:32:50.826738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.235 [2024-07-15 22:32:50.826903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.235 [2024-07-15 22:32:50.827011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.235 [2024-07-15 22:32:50.827138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.235 22:32:50 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:33.235 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.236 22:32:50 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.236 22:32:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.610 22:32:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.610 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.611 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.611 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.611 22:32:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.611 00:06:34.611 real 0m1.533s 00:06:34.611 user 0m4.742s 00:06:34.611 sys 0m0.131s 00:06:34.611 22:32:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.611 ************************************ 00:06:34.611 END TEST accel_decomp_full_mcore 00:06:34.611 ************************************ 00:06:34.611 22:32:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:34.611 22:32:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.611 22:32:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:34.611 22:32:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:34.611 22:32:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.611 22:32:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.611 ************************************ 00:06:34.611 START TEST accel_decomp_mthread 00:06:34.611 ************************************ 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:34.611 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:34.611 [2024-07-15 22:32:52.154321] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:34.611 [2024-07-15 22:32:52.154418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:06:34.611 [2024-07-15 22:32:52.287042] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.611 [2024-07-15 22:32:52.405548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.870 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.871 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.871 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.871 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.871 22:32:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.826 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.826 22:32:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.826 00:06:35.826 real 0m1.507s 00:06:35.826 user 0m1.297s 00:06:35.826 sys 0m0.115s 00:06:35.826 22:32:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.826 22:32:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:35.826 ************************************ 00:06:35.826 END TEST accel_decomp_mthread 00:06:35.826 ************************************ 00:06:36.105 22:32:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.105 22:32:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.105 22:32:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:36.105 22:32:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.105 22:32:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.105 ************************************ 00:06:36.105 START 
TEST accel_decomp_full_mthread 00:06:36.105 ************************************ 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:36.105 22:32:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:36.105 [2024-07-15 22:32:53.704767] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:36.105 [2024-07-15 22:32:53.704887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62044 ] 00:06:36.105 [2024-07-15 22:32:53.839127] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.364 [2024-07-15 22:32:53.956332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.364 22:32:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.364 22:32:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.742 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.743 ************************************ 00:06:37.743 END TEST accel_decomp_full_mthread 00:06:37.743 ************************************ 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.743 00:06:37.743 real 0m1.552s 00:06:37.743 user 0m1.349s 00:06:37.743 sys 0m0.107s 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.743 22:32:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
00:06:37.743 22:32:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.743 22:32:55 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:37.743 22:32:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.743 22:32:55 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:37.743 22:32:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:37.743 22:32:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.743 22:32:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.743 22:32:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.743 22:32:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.743 22:32:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.743 22:32:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.743 22:32:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.743 22:32:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:37.743 22:32:55 accel -- accel/accel.sh@41 -- # jq -r . 00:06:37.743 ************************************ 00:06:37.743 START TEST accel_dif_functional_tests 00:06:37.743 ************************************ 00:06:37.743 22:32:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.743 [2024-07-15 22:32:55.334628] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:37.743 [2024-07-15 22:32:55.334726] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62086 ] 00:06:37.743 [2024-07-15 22:32:55.466785] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.002 [2024-07-15 22:32:55.596581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.002 [2024-07-15 22:32:55.596741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.002 [2024-07-15 22:32:55.596758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.002 [2024-07-15 22:32:55.677708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.002 00:06:38.002 00:06:38.002 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.002 http://cunit.sourceforge.net/ 00:06:38.002 00:06:38.002 00:06:38.002 Suite: accel_dif 00:06:38.002 Test: verify: DIF generated, GUARD check ...passed 00:06:38.002 Test: verify: DIF generated, APPTAG check ...passed 00:06:38.002 Test: verify: DIF generated, REFTAG check ...passed 00:06:38.002 Test: verify: DIF not generated, GUARD check ...passed 00:06:38.002 Test: verify: DIF not generated, APPTAG check ...passed 00:06:38.002 Test: verify: DIF not generated, REFTAG check ...passed 00:06:38.002 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:38.002 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:38.002 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:38.002 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:38.002 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:38.002 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:38.002 Test: verify copy: DIF generated, GUARD check ...passed 00:06:38.002 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:38.002 Test: verify copy: DIF 
generated, REFTAG check ...[2024-07-15 22:32:55.729338] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.002 [2024-07-15 22:32:55.729427] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.002 [2024-07-15 22:32:55.729461] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.002 [2024-07-15 22:32:55.729533] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:38.002 [2024-07-15 22:32:55.729690] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:38.002 passed 00:06:38.002 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:38.002 Test: verify copy: DIF not generated, APPTAG check ...passed 00:06:38.002 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:38.002 Test: generate copy: DIF generated, GUARD check ...passed 00:06:38.002 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:38.002 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:38.002 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:38.002 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:38.002 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:38.002 Test: generate copy: iovecs-len validate ...[2024-07-15 22:32:55.729878] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.002 [2024-07-15 22:32:55.729914] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.002 [2024-07-15 22:32:55.729947] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.002 passed 00:06:38.002 Test: generate copy: buffer alignment validate ...passed 00:06:38.002 00:06:38.002 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.002 suites 1 1 n/a 0 0 00:06:38.002 tests 26 26 26 0 0 00:06:38.002 asserts 115 115 115 0 n/a 00:06:38.002 00:06:38.002 Elapsed time = 0.003 seconds 00:06:38.002 [2024-07-15 22:32:55.730241] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:38.260 00:06:38.260 real 0m0.755s 00:06:38.260 user 0m1.123s 00:06:38.260 sys 0m0.187s 00:06:38.260 ************************************ 00:06:38.260 END TEST accel_dif_functional_tests 00:06:38.260 ************************************ 00:06:38.260 22:32:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.260 22:32:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:38.260 22:32:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.260 00:06:38.260 real 0m36.964s 00:06:38.260 user 0m38.302s 00:06:38.260 sys 0m4.505s 00:06:38.260 22:32:56 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.260 ************************************ 00:06:38.260 END TEST accel 00:06:38.260 ************************************ 00:06:38.260 22:32:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.518 22:32:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.518 22:32:56 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:38.518 22:32:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.518 22:32:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.518 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:38.518 ************************************ 00:06:38.518 START TEST accel_rpc 00:06:38.518 ************************************ 00:06:38.518 22:32:56 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:38.518 * Looking for test storage... 00:06:38.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:38.518 22:32:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.518 22:32:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62156 00:06:38.518 22:32:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62156 00:06:38.518 22:32:56 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62156 ']' 00:06:38.518 22:32:56 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.518 22:32:56 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.518 22:32:56 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.518 22:32:56 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.519 22:32:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.519 22:32:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.519 [2024-07-15 22:32:56.276134] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:38.519 [2024-07-15 22:32:56.276254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62156 ] 00:06:38.777 [2024-07-15 22:32:56.412928] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.777 [2024-07-15 22:32:56.564605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.713 22:32:57 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.713 22:32:57 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:39.713 22:32:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:39.713 22:32:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:39.713 22:32:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:39.713 22:32:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:39.713 22:32:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:39.713 22:32:57 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.713 22:32:57 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.713 22:32:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 ************************************ 00:06:39.713 START TEST accel_assign_opcode 00:06:39.713 ************************************ 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 [2024-07-15 22:32:57.233305] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 [2024-07-15 22:32:57.241282] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.713 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 [2024-07-15 22:32:57.324498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.972 
22:32:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.972 software 00:06:39.972 00:06:39.972 real 0m0.378s 00:06:39.972 user 0m0.055s 00:06:39.972 sys 0m0.006s 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.972 22:32:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 ************************************ 00:06:39.972 END TEST accel_assign_opcode 00:06:39.972 ************************************ 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:39.972 22:32:57 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62156 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62156 ']' 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62156 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62156 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62156' 00:06:39.972 killing process with pid 62156 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@967 -- # kill 62156 00:06:39.972 22:32:57 accel_rpc -- common/autotest_common.sh@972 -- # wait 62156 00:06:40.541 00:06:40.541 real 0m2.101s 00:06:40.541 user 0m2.081s 00:06:40.541 sys 0m0.518s 00:06:40.541 22:32:58 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.541 22:32:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.541 ************************************ 00:06:40.541 END TEST accel_rpc 00:06:40.541 ************************************ 00:06:40.541 22:32:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.541 22:32:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.541 22:32:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.541 22:32:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.541 22:32:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.541 ************************************ 00:06:40.541 START TEST app_cmdline 00:06:40.541 ************************************ 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.541 * Looking for test storage... 
00:06:40.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.541 22:32:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.541 22:32:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62249 00:06:40.541 22:32:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.541 22:32:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62249 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62249 ']' 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.541 22:32:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.848 [2024-07-15 22:32:58.418542] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:40.848 [2024-07-15 22:32:58.418631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62249 ] 00:06:40.848 [2024-07-15 22:32:58.554036] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.107 [2024-07-15 22:32:58.702237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.107 [2024-07-15 22:32:58.778804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.675 22:32:59 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.675 22:32:59 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:41.675 22:32:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:41.934 { 00:06:41.934 "version": "SPDK v24.09-pre git sha1 e9e51ebfe", 00:06:41.934 "fields": { 00:06:41.934 "major": 24, 00:06:41.934 "minor": 9, 00:06:41.934 "patch": 0, 00:06:41.934 "suffix": "-pre", 00:06:41.934 "commit": "e9e51ebfe" 00:06:41.934 } 00:06:41.934 } 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.934 22:32:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:41.934 22:32:59 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.194 request: 00:06:42.194 { 00:06:42.194 "method": "env_dpdk_get_mem_stats", 00:06:42.194 "req_id": 1 00:06:42.194 } 00:06:42.194 Got JSON-RPC error response 00:06:42.194 response: 00:06:42.194 { 00:06:42.194 "code": -32601, 00:06:42.194 "message": "Method not found" 00:06:42.194 } 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.194 22:32:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62249 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62249 ']' 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62249 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62249 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.194 killing process with pid 62249 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62249' 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@967 -- # kill 62249 00:06:42.194 22:32:59 app_cmdline -- common/autotest_common.sh@972 -- # wait 62249 00:06:42.765 00:06:42.765 real 0m2.204s 00:06:42.765 user 0m2.557s 00:06:42.765 sys 0m0.582s 00:06:42.765 22:33:00 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.765 22:33:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.765 ************************************ 00:06:42.765 END TEST app_cmdline 00:06:42.765 
************************************ 00:06:42.765 22:33:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.765 22:33:00 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:42.765 22:33:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.765 22:33:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.765 22:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:42.765 ************************************ 00:06:42.765 START TEST version 00:06:42.765 ************************************ 00:06:42.765 22:33:00 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:43.024 * Looking for test storage... 00:06:43.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:43.024 22:33:00 version -- app/version.sh@17 -- # get_header_version major 00:06:43.024 22:33:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:43.024 22:33:00 version -- app/version.sh@14 -- # cut -f2 00:06:43.024 22:33:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.024 22:33:00 version -- app/version.sh@17 -- # major=24 00:06:43.024 22:33:00 version -- app/version.sh@18 -- # get_header_version minor 00:06:43.024 22:33:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:43.024 22:33:00 version -- app/version.sh@14 -- # cut -f2 00:06:43.024 22:33:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.024 22:33:00 version -- app/version.sh@18 -- # minor=9 00:06:43.024 22:33:00 version -- app/version.sh@19 -- # get_header_version patch 00:06:43.024 22:33:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:43.024 22:33:00 version -- app/version.sh@14 -- # cut -f2 00:06:43.024 22:33:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.024 22:33:00 version -- app/version.sh@19 -- # patch=0 00:06:43.024 22:33:00 version -- app/version.sh@20 -- # get_header_version suffix 00:06:43.024 22:33:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:43.025 22:33:00 version -- app/version.sh@14 -- # cut -f2 00:06:43.025 22:33:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.025 22:33:00 version -- app/version.sh@20 -- # suffix=-pre 00:06:43.025 22:33:00 version -- app/version.sh@22 -- # version=24.9 00:06:43.025 22:33:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:43.025 22:33:00 version -- app/version.sh@28 -- # version=24.9rc0 00:06:43.025 22:33:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:43.025 22:33:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:43.025 22:33:00 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:43.025 22:33:00 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:43.025 00:06:43.025 real 0m0.151s 00:06:43.025 user 0m0.077s 00:06:43.025 sys 0m0.107s 00:06:43.025 22:33:00 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.025 ************************************ 00:06:43.025 END TEST 
version 00:06:43.025 ************************************ 00:06:43.025 22:33:00 version -- common/autotest_common.sh@10 -- # set +x 00:06:43.025 22:33:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.025 22:33:00 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:43.025 22:33:00 -- spdk/autotest.sh@198 -- # uname -s 00:06:43.025 22:33:00 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:43.025 22:33:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:43.025 22:33:00 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:43.025 22:33:00 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:43.025 22:33:00 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:43.025 22:33:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.025 22:33:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.025 22:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:43.025 ************************************ 00:06:43.025 START TEST spdk_dd 00:06:43.025 ************************************ 00:06:43.025 22:33:00 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:43.025 * Looking for test storage... 00:06:43.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.025 22:33:00 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.025 22:33:00 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.025 22:33:00 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.025 22:33:00 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.025 22:33:00 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.025 22:33:00 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.025 22:33:00 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.025 22:33:00 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:43.025 22:33:00 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.025 22:33:00 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.595 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:43.595 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:43.595 22:33:01 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:43.595 22:33:01 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:43.595 22:33:01 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:43.595 22:33:01 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:43.595 22:33:01 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:43.595 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:43.596 22:33:01 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:43.596 
22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:43.596 * spdk_dd linked to liburing 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:43.596 22:33:01 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:43.596 22:33:01 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:43.597 22:33:01 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:43.597 22:33:01 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:43.597 22:33:01 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:43.597 22:33:01 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:43.597 22:33:01 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:43.597 22:33:01 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:43.597 22:33:01 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:43.597 22:33:01 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:43.597 22:33:01 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.597 22:33:01 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.597 22:33:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.597 ************************************ 00:06:43.597 START TEST spdk_dd_basic_rw 00:06:43.597 ************************************ 00:06:43.597 22:33:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:43.858 * Looking for test storage... 
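Before the basic_rw run starts, it helps to restate the gate that the trace above just cleared: build_config.sh is sourced to expose the build flags (CONFIG_URING=y among them), dd/common.sh checks that /usr/lib64/liburing.so.2 really exists and exports liburing_in_use=1, and dd.sh@15 only aborts when uring testing was requested while liburing is unusable. A condensed restatement of that logic follows; reading the @149 comparison as a CONFIG_URING check and the wording of the error message are assumptions, the conditions themselves come from the trace.

    # Condensed illustration of the liburing gate traced above (not the verbatim scripts).
    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh

    if [[ $CONFIG_URING == y && -e /usr/lib64/liburing.so.2 ]]; then
        export liburing_in_use=1
    else
        export liburing_in_use=0
    fi

    # dd.sh bails out only when uring testing was requested but liburing cannot be used.
    if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
        echo 'SPDK_TEST_URING=1 but spdk_dd is not using liburing' >&2
        exit 1
    fi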
00:06:43.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:43.858 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:43.859 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:43.859 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 ************************************ 00:06:43.860 START TEST dd_bs_lt_native_bs 00:06:43.860 ************************************ 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.860 22:33:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:44.119 { 00:06:44.119 "subsystems": [ 00:06:44.119 { 00:06:44.119 "subsystem": "bdev", 00:06:44.119 "config": [ 00:06:44.119 { 00:06:44.119 "params": { 00:06:44.119 "trtype": "pcie", 00:06:44.119 "traddr": "0000:00:10.0", 00:06:44.119 "name": "Nvme0" 00:06:44.119 }, 00:06:44.119 "method": "bdev_nvme_attach_controller" 00:06:44.119 }, 00:06:44.119 { 00:06:44.119 "method": "bdev_wait_for_examine" 00:06:44.119 } 00:06:44.119 ] 00:06:44.119 } 00:06:44.119 ] 00:06:44.119 } 00:06:44.119 [2024-07-15 22:33:01.718561] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
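The two large identify dumps above are get_native_nvme_bs at work: dd/common.sh captures the spdk_nvme_identify output for the controller at 0000:00:10.0, extracts the current LBA format index from it (#04 here), then extracts that format's data size, 4096 bytes, which basic_rw.sh stores as native_bs. A condensed sketch of the extraction is below; it folds the output into a single string instead of the script's mapfile array, but the two regular expressions are the ones shown at dd/common.sh@129 and @131.

    # Condensed sketch of get_native_nvme_bs as traced above.
    get_native_nvme_bs() {
        local pci=$1 id lbaf re
        id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re ]] || return 1
        lbaf=${BASH_REMATCH[1]}                          # "04" for this QEMU namespace

        re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re ]] || return 1
        echo "${BASH_REMATCH[1]}"                        # 4096
    }

    native_bs=$(get_native_nvme_bs 0000:00:10.0)         # -> 4096, matching dd/common.sh@134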
00:06:44.119 [2024-07-15 22:33:01.718675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62569 ] 00:06:44.119 [2024-07-15 22:33:01.857416] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.377 [2024-07-15 22:33:02.006040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.377 [2024-07-15 22:33:02.080513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.377 [2024-07-15 22:33:02.200173] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:44.377 [2024-07-15 22:33:02.200253] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.636 [2024-07-15 22:33:02.377027] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.895 00:06:44.895 real 0m0.865s 00:06:44.895 user 0m0.611s 00:06:44.895 sys 0m0.206s 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:44.895 ************************************ 00:06:44.895 END TEST dd_bs_lt_native_bs 00:06:44.895 ************************************ 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.895 ************************************ 00:06:44.895 START TEST dd_rw 00:06:44.895 ************************************ 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:44.895 22:33:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.827 22:33:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:45.827 22:33:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.827 22:33:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.827 22:33:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.827 { 00:06:45.827 "subsystems": [ 00:06:45.827 { 00:06:45.827 "subsystem": "bdev", 00:06:45.827 "config": [ 00:06:45.827 { 00:06:45.827 "params": { 00:06:45.827 "trtype": "pcie", 00:06:45.827 "traddr": "0000:00:10.0", 00:06:45.827 "name": "Nvme0" 00:06:45.827 }, 00:06:45.827 "method": "bdev_nvme_attach_controller" 00:06:45.827 }, 00:06:45.827 { 00:06:45.827 "method": "bdev_wait_for_examine" 00:06:45.827 } 00:06:45.827 ] 00:06:45.827 } 00:06:45.827 ] 00:06:45.827 } 00:06:45.827 [2024-07-15 22:33:03.409372] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:45.827 [2024-07-15 22:33:03.409469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62611 ] 00:06:45.827 [2024-07-15 22:33:03.542355] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.085 [2024-07-15 22:33:03.690151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.085 [2024-07-15 22:33:03.764005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.652  Copying: 60/60 [kB] (average 29 MBps) 00:06:46.652 00:06:46.653 22:33:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:46.653 22:33:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:46.653 22:33:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.653 22:33:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.653 [2024-07-15 22:33:04.254264] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
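The burst of errors just before the dd_rw section is the intended outcome of dd_bs_lt_native_bs: spdk_dd refuses --bs=2048 because it is smaller than the 4096-byte native block size discovered above, spdk_app_stop reports a non-zero exit, and the NOT wrapper from autotest_common.sh turns that failure into a pass (the es=234 -> 106 -> 1 sequence in the trace is its exit-status normalization). A rough stand-in for such a wrapper is sketched below; the name expect_failure and the simplified body are assumptions, and the real NOT also normalizes large exit codes as the trace shows.

    # Rough stand-in for an expect-failure wrapper; not the SPDK helper itself.
    expect_failure() {
        if "$@"; then
            return 1             # the wrapped command succeeding would be the real failure
        fi
        return 0                 # a non-zero exit is the expected, passing outcome
    }

With a wrapper like that in place, running spdk_dd with --bs=2048 against Nvme0n1 fails as required, and the whole negative test completes in the 0.865 s of wall time reported above.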
00:06:46.653 [2024-07-15 22:33:04.254385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62619 ] 00:06:46.653 { 00:06:46.653 "subsystems": [ 00:06:46.653 { 00:06:46.653 "subsystem": "bdev", 00:06:46.653 "config": [ 00:06:46.653 { 00:06:46.653 "params": { 00:06:46.653 "trtype": "pcie", 00:06:46.653 "traddr": "0000:00:10.0", 00:06:46.653 "name": "Nvme0" 00:06:46.653 }, 00:06:46.653 "method": "bdev_nvme_attach_controller" 00:06:46.653 }, 00:06:46.653 { 00:06:46.653 "method": "bdev_wait_for_examine" 00:06:46.653 } 00:06:46.653 ] 00:06:46.653 } 00:06:46.653 ] 00:06:46.653 } 00:06:46.653 [2024-07-15 22:33:04.393367] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.911 [2024-07-15 22:33:04.541922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.911 [2024-07-15 22:33:04.615712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.476  Copying: 60/60 [kB] (average 19 MBps) 00:06:47.476 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.476 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.476 [2024-07-15 22:33:05.109281] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:47.476 [2024-07-15 22:33:05.109384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62640 ] 00:06:47.476 { 00:06:47.476 "subsystems": [ 00:06:47.476 { 00:06:47.476 "subsystem": "bdev", 00:06:47.476 "config": [ 00:06:47.476 { 00:06:47.476 "params": { 00:06:47.476 "trtype": "pcie", 00:06:47.476 "traddr": "0000:00:10.0", 00:06:47.476 "name": "Nvme0" 00:06:47.476 }, 00:06:47.476 "method": "bdev_nvme_attach_controller" 00:06:47.476 }, 00:06:47.477 { 00:06:47.477 "method": "bdev_wait_for_examine" 00:06:47.477 } 00:06:47.477 ] 00:06:47.477 } 00:06:47.477 ] 00:06:47.477 } 00:06:47.477 [2024-07-15 22:33:05.247345] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.734 [2024-07-15 22:33:05.397442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.735 [2024-07-15 22:33:05.471417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.993  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:47.993 00:06:47.993 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:48.252 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:48.252 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:48.252 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:48.252 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:48.252 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:48.252 22:33:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.818 22:33:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:48.818 22:33:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:48.818 22:33:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.818 22:33:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.818 { 00:06:48.818 "subsystems": [ 00:06:48.818 { 00:06:48.818 "subsystem": "bdev", 00:06:48.818 "config": [ 00:06:48.818 { 00:06:48.818 "params": { 00:06:48.818 "trtype": "pcie", 00:06:48.818 "traddr": "0000:00:10.0", 00:06:48.818 "name": "Nvme0" 00:06:48.818 }, 00:06:48.818 "method": "bdev_nvme_attach_controller" 00:06:48.818 }, 00:06:48.818 { 00:06:48.818 "method": "bdev_wait_for_examine" 00:06:48.818 } 00:06:48.818 ] 00:06:48.818 } 00:06:48.818 ] 00:06:48.818 } 00:06:48.818 [2024-07-15 22:33:06.539691] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
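Each dd_rw iteration follows the same four-step cycle that the traces above show for the 4096-byte, queue-depth-1 case: write dd.dump0 into the Nvme0n1 bdev, read the same region back into dd.dump1, compare the two files, then zero the start of the bdev before the next combination. Stripped of the gen_conf/--json plumbing and the xtrace noise (an omission for readability, not how the script actually invokes spdk_dd), one pass looks roughly like this:

    # One basic_rw pass at bs=4096, qd=1, condensed from the trace above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs=4096 --qd=1               # write the 61440-byte test file
    "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs=4096 --qd=1 --count=15    # read the same 15 blocks back
    diff -q "$dump0" "$dump1"                                            # verify the round trip
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1        # clear_nvme: wipe 1 MiB before the next pass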
00:06:48.818 [2024-07-15 22:33:06.539796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62670 ] 00:06:49.076 [2024-07-15 22:33:06.679501] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.076 [2024-07-15 22:33:06.803331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.076 [2024-07-15 22:33:06.861839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.594  Copying: 60/60 [kB] (average 58 MBps) 00:06:49.594 00:06:49.595 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:49.595 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:49.595 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.595 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.595 { 00:06:49.595 "subsystems": [ 00:06:49.595 { 00:06:49.595 "subsystem": "bdev", 00:06:49.595 "config": [ 00:06:49.595 { 00:06:49.595 "params": { 00:06:49.595 "trtype": "pcie", 00:06:49.595 "traddr": "0000:00:10.0", 00:06:49.595 "name": "Nvme0" 00:06:49.595 }, 00:06:49.595 "method": "bdev_nvme_attach_controller" 00:06:49.595 }, 00:06:49.595 { 00:06:49.595 "method": "bdev_wait_for_examine" 00:06:49.595 } 00:06:49.595 ] 00:06:49.595 } 00:06:49.595 ] 00:06:49.595 } 00:06:49.595 [2024-07-15 22:33:07.250597] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:49.595 [2024-07-15 22:33:07.250726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62678 ] 00:06:49.595 [2024-07-15 22:33:07.392051] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.853 [2024-07-15 22:33:07.527043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.853 [2024-07-15 22:33:07.580061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.113  Copying: 60/60 [kB] (average 58 MBps) 00:06:50.113 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.113 22:33:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.401 [2024-07-15 22:33:07.972734] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
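The { "subsystems": ... } blobs that precede every Starting SPDK line are the configuration gen_conf hands to spdk_dd on --json /dev/fd/62: a single bdev subsystem that attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for its namespaces to be examined. Reassembled from the trace, with only whitespace added, the document is:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            {
              "method": "bdev_wait_for_examine"
            }
          ]
        }
      ]
    }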
00:06:50.401 [2024-07-15 22:33:07.972863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62699 ] 00:06:50.401 { 00:06:50.401 "subsystems": [ 00:06:50.401 { 00:06:50.401 "subsystem": "bdev", 00:06:50.401 "config": [ 00:06:50.401 { 00:06:50.401 "params": { 00:06:50.401 "trtype": "pcie", 00:06:50.401 "traddr": "0000:00:10.0", 00:06:50.401 "name": "Nvme0" 00:06:50.401 }, 00:06:50.401 "method": "bdev_nvme_attach_controller" 00:06:50.401 }, 00:06:50.401 { 00:06:50.401 "method": "bdev_wait_for_examine" 00:06:50.401 } 00:06:50.401 ] 00:06:50.401 } 00:06:50.401 ] 00:06:50.401 } 00:06:50.401 [2024-07-15 22:33:08.107282] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.401 [2024-07-15 22:33:08.224123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.659 [2024-07-15 22:33:08.276843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.917  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:50.917 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:50.917 22:33:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.485 22:33:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:51.485 22:33:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:51.485 22:33:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.485 22:33:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.485 { 00:06:51.485 "subsystems": [ 00:06:51.485 { 00:06:51.485 "subsystem": "bdev", 00:06:51.485 "config": [ 00:06:51.485 { 00:06:51.485 "params": { 00:06:51.485 "trtype": "pcie", 00:06:51.485 "traddr": "0000:00:10.0", 00:06:51.485 "name": "Nvme0" 00:06:51.485 }, 00:06:51.485 "method": "bdev_nvme_attach_controller" 00:06:51.485 }, 00:06:51.485 { 00:06:51.485 "method": "bdev_wait_for_examine" 00:06:51.485 } 00:06:51.485 ] 00:06:51.485 } 00:06:51.485 ] 00:06:51.485 } 00:06:51.744 [2024-07-15 22:33:09.322521] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:51.744 [2024-07-15 22:33:09.322627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62718 ] 00:06:51.744 [2024-07-15 22:33:09.463557] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.002 [2024-07-15 22:33:09.612027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.002 [2024-07-15 22:33:09.689142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.570  Copying: 56/56 [kB] (average 54 MBps) 00:06:52.570 00:06:52.570 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:52.570 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:52.570 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.570 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.570 { 00:06:52.570 "subsystems": [ 00:06:52.570 { 00:06:52.570 "subsystem": "bdev", 00:06:52.570 "config": [ 00:06:52.570 { 00:06:52.570 "params": { 00:06:52.570 "trtype": "pcie", 00:06:52.570 "traddr": "0000:00:10.0", 00:06:52.570 "name": "Nvme0" 00:06:52.570 }, 00:06:52.570 "method": "bdev_nvme_attach_controller" 00:06:52.570 }, 00:06:52.570 { 00:06:52.571 "method": "bdev_wait_for_examine" 00:06:52.571 } 00:06:52.571 ] 00:06:52.571 } 00:06:52.571 ] 00:06:52.571 } 00:06:52.571 [2024-07-15 22:33:10.178351] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
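The switch from 60/60 [kB] to 56/56 [kB] progress lines is just the block-size sweep doing its arithmetic: basic_rw builds bss by left-shifting the 4096-byte native block size (4096, 8192, 16384) and pairs each size used here with the count shown in the trace, 15 blocks for 4096 and 7 blocks for 8192, which yields the 61440- and 57344-byte test files passed to gen_bytes. In shell terms:

    native_bs=4096
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))       # -> 4096 8192 16384
    done
    echo $((15 * 4096))                   # 61440 bytes for the 4096-byte passes
    echo $((7 * 8192))                    # 57344 bytes for the 8192-byte passes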
00:06:52.571 [2024-07-15 22:33:10.178459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:06:52.571 [2024-07-15 22:33:10.313549] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.829 [2024-07-15 22:33:10.478827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.829 [2024-07-15 22:33:10.558980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.346  Copying: 56/56 [kB] (average 27 MBps) 00:06:53.346 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.346 22:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.346 { 00:06:53.346 "subsystems": [ 00:06:53.346 { 00:06:53.346 "subsystem": "bdev", 00:06:53.346 "config": [ 00:06:53.346 { 00:06:53.346 "params": { 00:06:53.346 "trtype": "pcie", 00:06:53.346 "traddr": "0000:00:10.0", 00:06:53.346 "name": "Nvme0" 00:06:53.346 }, 00:06:53.346 "method": "bdev_nvme_attach_controller" 00:06:53.346 }, 00:06:53.346 { 00:06:53.346 "method": "bdev_wait_for_examine" 00:06:53.346 } 00:06:53.346 ] 00:06:53.346 } 00:06:53.346 ] 00:06:53.346 } 00:06:53.346 [2024-07-15 22:33:11.048486] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:53.346 [2024-07-15 22:33:11.048573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62758 ] 00:06:53.603 [2024-07-15 22:33:11.183701] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.603 [2024-07-15 22:33:11.333091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.603 [2024-07-15 22:33:11.410607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.120  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:54.120 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:54.120 22:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.056 22:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:55.056 22:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:55.056 22:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.056 22:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.057 [2024-07-15 22:33:12.584880] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:55.057 [2024-07-15 22:33:12.584993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62783 ] 00:06:55.057 { 00:06:55.057 "subsystems": [ 00:06:55.057 { 00:06:55.057 "subsystem": "bdev", 00:06:55.057 "config": [ 00:06:55.057 { 00:06:55.057 "params": { 00:06:55.057 "trtype": "pcie", 00:06:55.057 "traddr": "0000:00:10.0", 00:06:55.057 "name": "Nvme0" 00:06:55.057 }, 00:06:55.057 "method": "bdev_nvme_attach_controller" 00:06:55.057 }, 00:06:55.057 { 00:06:55.057 "method": "bdev_wait_for_examine" 00:06:55.057 } 00:06:55.057 ] 00:06:55.057 } 00:06:55.057 ] 00:06:55.057 } 00:06:55.057 [2024-07-15 22:33:12.719578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.057 [2024-07-15 22:33:12.880234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.318 [2024-07-15 22:33:12.962447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.583  Copying: 56/56 [kB] (average 54 MBps) 00:06:55.583 00:06:55.583 22:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:55.583 22:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:55.583 22:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.583 22:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 { 00:06:55.841 "subsystems": [ 00:06:55.841 { 00:06:55.841 "subsystem": "bdev", 00:06:55.841 "config": [ 00:06:55.841 { 00:06:55.841 "params": { 00:06:55.841 "trtype": "pcie", 00:06:55.842 "traddr": "0000:00:10.0", 00:06:55.842 "name": "Nvme0" 00:06:55.842 }, 00:06:55.842 "method": "bdev_nvme_attach_controller" 00:06:55.842 }, 00:06:55.842 { 00:06:55.842 "method": "bdev_wait_for_examine" 00:06:55.842 } 00:06:55.842 ] 00:06:55.842 } 00:06:55.842 ] 00:06:55.842 } 00:06:55.842 [2024-07-15 22:33:13.463284] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:55.842 [2024-07-15 22:33:13.463384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62796 ] 00:06:55.842 [2024-07-15 22:33:13.608854] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.100 [2024-07-15 22:33:13.759897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.100 [2024-07-15 22:33:13.836365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.615  Copying: 56/56 [kB] (average 54 MBps) 00:06:56.615 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.615 22:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.615 { 00:06:56.616 "subsystems": [ 00:06:56.616 { 00:06:56.616 "subsystem": "bdev", 00:06:56.616 "config": [ 00:06:56.616 { 00:06:56.616 "params": { 00:06:56.616 "trtype": "pcie", 00:06:56.616 "traddr": "0000:00:10.0", 00:06:56.616 "name": "Nvme0" 00:06:56.616 }, 00:06:56.616 "method": "bdev_nvme_attach_controller" 00:06:56.616 }, 00:06:56.616 { 00:06:56.616 "method": "bdev_wait_for_examine" 00:06:56.616 } 00:06:56.616 ] 00:06:56.616 } 00:06:56.616 ] 00:06:56.616 } 00:06:56.616 [2024-07-15 22:33:14.293150] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:56.616 [2024-07-15 22:33:14.293290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62817 ] 00:06:56.616 [2024-07-15 22:33:14.430649] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.873 [2024-07-15 22:33:14.585396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.874 [2024-07-15 22:33:14.650375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.390  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:57.390 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:57.390 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.957 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:57.957 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:57.957 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.957 22:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.957 [2024-07-15 22:33:15.701673] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:57.957 [2024-07-15 22:33:15.702093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62842 ] 00:06:57.957 { 00:06:57.957 "subsystems": [ 00:06:57.957 { 00:06:57.957 "subsystem": "bdev", 00:06:57.957 "config": [ 00:06:57.957 { 00:06:57.957 "params": { 00:06:57.957 "trtype": "pcie", 00:06:57.957 "traddr": "0000:00:10.0", 00:06:57.957 "name": "Nvme0" 00:06:57.957 }, 00:06:57.957 "method": "bdev_nvme_attach_controller" 00:06:57.957 }, 00:06:57.957 { 00:06:57.958 "method": "bdev_wait_for_examine" 00:06:57.958 } 00:06:57.958 ] 00:06:57.958 } 00:06:57.958 ] 00:06:57.958 } 00:06:58.216 [2024-07-15 22:33:15.840499] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.216 [2024-07-15 22:33:15.991667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.475 [2024-07-15 22:33:16.072921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.733  Copying: 48/48 [kB] (average 46 MBps) 00:06:58.733 00:06:58.733 22:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:58.733 22:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:58.733 22:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.733 22:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.991 { 00:06:58.991 "subsystems": [ 00:06:58.991 { 00:06:58.991 "subsystem": "bdev", 00:06:58.991 "config": [ 00:06:58.991 { 00:06:58.991 "params": { 00:06:58.991 "trtype": "pcie", 00:06:58.991 "traddr": "0000:00:10.0", 00:06:58.991 "name": "Nvme0" 00:06:58.991 }, 00:06:58.991 "method": "bdev_nvme_attach_controller" 00:06:58.991 }, 00:06:58.991 { 00:06:58.991 "method": "bdev_wait_for_examine" 00:06:58.991 } 00:06:58.991 ] 00:06:58.991 } 00:06:58.991 ] 00:06:58.991 } 00:06:58.991 [2024-07-15 22:33:16.586289] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:58.991 [2024-07-15 22:33:16.586440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62860 ] 00:06:58.991 [2024-07-15 22:33:16.727361] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.250 [2024-07-15 22:33:16.879286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.250 [2024-07-15 22:33:16.958615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.818  Copying: 48/48 [kB] (average 46 MBps) 00:06:59.818 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.818 22:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.818 { 00:06:59.818 "subsystems": [ 00:06:59.818 { 00:06:59.818 "subsystem": "bdev", 00:06:59.818 "config": [ 00:06:59.818 { 00:06:59.818 "params": { 00:06:59.818 "trtype": "pcie", 00:06:59.818 "traddr": "0000:00:10.0", 00:06:59.818 "name": "Nvme0" 00:06:59.818 }, 00:06:59.818 "method": "bdev_nvme_attach_controller" 00:06:59.818 }, 00:06:59.818 { 00:06:59.818 "method": "bdev_wait_for_examine" 00:06:59.818 } 00:06:59.818 ] 00:06:59.818 } 00:06:59.818 ] 00:06:59.818 } 00:06:59.818 [2024-07-15 22:33:17.457849] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:59.818 [2024-07-15 22:33:17.457992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62876 ] 00:06:59.818 [2024-07-15 22:33:17.596397] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.076 [2024-07-15 22:33:17.746940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.076 [2024-07-15 22:33:17.824638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.593  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:00.593 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:00.593 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:01.160 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:01.160 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.160 22:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 [2024-07-15 22:33:18.858647] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:01.160 [2024-07-15 22:33:18.858971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62901 ] 00:07:01.160 { 00:07:01.160 "subsystems": [ 00:07:01.160 { 00:07:01.160 "subsystem": "bdev", 00:07:01.160 "config": [ 00:07:01.160 { 00:07:01.160 "params": { 00:07:01.160 "trtype": "pcie", 00:07:01.160 "traddr": "0000:00:10.0", 00:07:01.160 "name": "Nvme0" 00:07:01.160 }, 00:07:01.160 "method": "bdev_nvme_attach_controller" 00:07:01.160 }, 00:07:01.160 { 00:07:01.160 "method": "bdev_wait_for_examine" 00:07:01.160 } 00:07:01.160 ] 00:07:01.160 } 00:07:01.160 ] 00:07:01.160 } 00:07:01.160 [2024-07-15 22:33:18.993641] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.418 [2024-07-15 22:33:19.139360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.418 [2024-07-15 22:33:19.199039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.933  Copying: 48/48 [kB] (average 46 MBps) 00:07:01.933 00:07:01.933 22:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:01.934 22:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:01.934 22:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.934 22:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 [2024-07-15 22:33:19.582662] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:01.934 [2024-07-15 22:33:19.582741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62914 ] 00:07:01.934 { 00:07:01.934 "subsystems": [ 00:07:01.934 { 00:07:01.934 "subsystem": "bdev", 00:07:01.934 "config": [ 00:07:01.934 { 00:07:01.934 "params": { 00:07:01.934 "trtype": "pcie", 00:07:01.934 "traddr": "0000:00:10.0", 00:07:01.934 "name": "Nvme0" 00:07:01.934 }, 00:07:01.934 "method": "bdev_nvme_attach_controller" 00:07:01.934 }, 00:07:01.934 { 00:07:01.934 "method": "bdev_wait_for_examine" 00:07:01.934 } 00:07:01.934 ] 00:07:01.934 } 00:07:01.934 ] 00:07:01.934 } 00:07:01.934 [2024-07-15 22:33:19.716671] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.191 [2024-07-15 22:33:19.879545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.191 [2024-07-15 22:33:19.935729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.448  Copying: 48/48 [kB] (average 46 MBps) 00:07:02.448 00:07:02.448 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.705 22:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.705 [2024-07-15 22:33:20.335889] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:02.705 [2024-07-15 22:33:20.336015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62935 ] 00:07:02.705 { 00:07:02.705 "subsystems": [ 00:07:02.705 { 00:07:02.705 "subsystem": "bdev", 00:07:02.705 "config": [ 00:07:02.705 { 00:07:02.705 "params": { 00:07:02.705 "trtype": "pcie", 00:07:02.705 "traddr": "0000:00:10.0", 00:07:02.705 "name": "Nvme0" 00:07:02.705 }, 00:07:02.705 "method": "bdev_nvme_attach_controller" 00:07:02.705 }, 00:07:02.705 { 00:07:02.705 "method": "bdev_wait_for_examine" 00:07:02.706 } 00:07:02.706 ] 00:07:02.706 } 00:07:02.706 ] 00:07:02.706 } 00:07:02.706 [2024-07-15 22:33:20.473824] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.962 [2024-07-15 22:33:20.593280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.962 [2024-07-15 22:33:20.648714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.539  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:03.539 00:07:03.539 00:07:03.539 real 0m18.509s 00:07:03.539 user 0m13.815s 00:07:03.539 sys 0m6.702s 00:07:03.539 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.539 ************************************ 00:07:03.539 END TEST dd_rw 00:07:03.539 ************************************ 00:07:03.539 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.539 22:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:03.539 22:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:03.539 22:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.540 ************************************ 00:07:03.540 START TEST dd_rw_offset 00:07:03.540 ************************************ 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=cat93jutlsaddefihzmzhi76s96ebj4w8hm9mhyk1inlejtgueboar2og2uwail46y51tfdf9rol2nl3b9h64kvytrmismo9jeysatbabpi6bsid1nqze8kachlg4ef9phtfupit9pbkjhzao870335m61nlz3bur5ruvuxnrrk3ljroo50pskvk3czs0xw669f1iqyd74hveyslspxlipmrzk69503w1vfv3vkzl9h9ytdyprez8d3e6cgidkjp8d6l5urg8xn9nzwpb97basfhcx9eo5ce4yg6tepz6athd5ljuhu5u9wddw5785nz7m7il4p557gsdfwm7gy93wtb2cqgas44vjw43d6fbvrdsssvvjp514kc54hanafiq7cpd9zj9i5p6zsfcogcq3qfyb7yvwn901mtvkr66kdyfdcmt63nbd2oouo5ltfn74oua17kw5wafaxqmojbs0ycu7s4ebaseebn4hnf2la1mtonmivvii0tdcxxct5h5zh7b4r0ee68tgnpxfurzcajk71vqlu1k8fd0ph29mfnmcyclqdnogqwz4xcz3hkbpzs13kfcd0blro4ecup2soii93e030n1b11ejfwgvxdj91250aaznavh4mrjydcwtj5b4nzjqo27o0nfsj2zjzung893m77h38bqvy8zgzgow54hwievkia8wonpqhugj9cshgxypletdkqpfosf2z1b15eouy6x1q6cenxv0itbfjg67fqe89fp1yhvx1fur6z2sdp4t0btveqwqadosn7qbdd0o3byyjuxcqhwmm397fplud1g7jixbz9tn1tgrcrc7in752oax1cvslhhzxes5t80xznpgikb2e29cw97wcjw76jnu9twou79f1zlaox3lyndu2vvjmpmnhpw8pofk48pdj31irtihpgxrkueh421yzbwc8fo5d3y1ty22bakqgisszcpx5j81rgoka1211863p1e61zzyu4d43jzwc1tsjqzut472wzqa0mivno8c2di58odfmkr92uawrsdx4uc8lp7zigzaz9de98zdgbhanh8vhniacanwwrwiavf2bmft61505m5digearixcz5gbfzin9jqdxl7oyupzwuo220quxd86lepjb1utqmybily8dcqm3fa6hbdubj7fwf6ojo1w4prdna19qdh5xf5x45vu9pfntgieisbh1mtv9gxdfo5necg9uzcyt3fdvd911pg6e0gtqgahuewutiooy6zeb614cx8fr5r0v18kbl2c7xii0qivrlm3pc4cc3luf2uteshl6slfjabe893ktaomvdyfs8lsnef1qknm7otfwngdhugxuol1ay45s0e9nhe8c16zvbrnmwngff1dzrxxx8d18d6qb8djkgh5p6p41e0c5om3qtaaa3xvz7ha6y14swdl6sei395jp3j2iwc9d8ljnazpve7co225h8etfrafy0vhp0wyfeuh1b2xzw3kmud78hyzwx7ayao6yailuq2uhlicgw7r3mah4n4jhvtrbm038j8cnlcef8rjlmde8u4v5chs2shrkt4n897q2rukrtwq1zi7q852dn56ol18d0qm9fxp8nmng2by0r2bv0qa8rjl5ywwk60u3h8dt1zkjt877kpw0bjmrdnw8zww3tb95kzln2p3vmigd9ywgv7og53j1rfzale5yrkmd8vut4feyzuosgmsbl42o8cbehxudu2lbtf5cevalkpy3kcltj1fwzgzwheuuqmhxn36o4em4eb18ugij9buzfocmge1x5w7n1s8ebrl13fcbjdl0w5csgprjzf2jr1k09eilzwqzkwd8s0zxce66r2rmo8812ywfcx4wmgtxgohpcalnicgucc3jx8242s08f6m3vwpm5v0ff7xdrftszkfb6vfgqba8bujefd3h8i1kgu27l6keye37sq3wuv1q2knthn9zl8c2na572cpyo9eha7khbio36d5eh3zgvjmtuofxcj52n4lf159nb6851lek7nhhpdqmqbhn1fhuya5arj446npewkp86m3pf37l06bk0awqcbjukblvwi7favebytdhzjpzt9f54ayzpatu1v65v8zwzlxdogfrc6hxlrk34h89fcxmtwdf3bbrv8rsmyjixod9vy0a1kv4unjgt9xhc0f68w6jmecjfhwyyq8zgam20678it28q73a0965ma8h59e2d6w0iwaiso82x4pojpiq13kjra1r062j2wgo4mnqppo5s1iw52geppoe30k1igei5nkrqa2yeuxb7zoasge58rlvm6mup3wg2osw5r5pvie48ouqkubuzbcltkijyy4gbvgeyiknk9ql0ivm0flgjeocz1t12mfg85t6wgkaz6m69b72emrfxj97lrafgnn2372szz9cn10lzes4qirdorjnj92iwjji0fi7renryyb8t0vx82z6l5uphffnik64qqr1fknh9l8aidj5f98n1ehljnypg9vhl2lkn0dey7jrfldvtxz68kmnj4dk0nv3dyezdh0cda9x4gkcx6n3rzsvrs7vnruanubhltyu93rsdjxqtnx0vuylt7dcg564ttad1y43d9yuoqrs1einkick0d2luvzirrdkg62e8njwm99v1j1oe2q7fpaa7zhkftx1eza4p9l6i19zijykc39qvhm7j0coryclirrgd2l8l6dpt93eqlfywed48ijnt6r2iefrygldhbzeu410rmupdzqnveqwme8w7sl866en1rvk481dzrl8n1guzuzs5wk8wpk0cboqx3eo8zegwv5cbruzr00b62ftpof7wynutvg804psven65rp2vf7m77dj733vjbryjnaa23aoishwxpbfsmaq31nou2hbrkkl9jwvgq8ec96ua11bqqzipcnytskotpx3x93b49ip7r8v4aitdptiwuupt039byrkqesdieqcqb8pacnykxjhunrecbo3biangx74qjh7lrf6d1dm6cn7dmi3dhwx7rdjc9dhbcvj4d8y7e79rb96o4xr7z2womg0rlauarq11gj8yij4pvdeatnikf0uhhx3n6d7gwek7i9tw4xhez7k0vddfeqyw7m3u1pl9sxu1vqzvrggjbg8vuxob5bjysiwgo8vx2x01llmymjlqv1xx6r2iz8k0celk5nqtq94pv6xuggpsbo8whxastlka0cgld48snpfyljv44ptejvstcynsxyt80njbrpcx84e14blkft07rqhnisczrhqyezcy690uip92zen5nrd5ljpye7s8ctfxz4xy2zza1dcxmpoehmiigo5b4muip4s692yjxggf2497i2r1j8ik3kqnlt919u91rp7smq6umegljve7rz0otfpddmvk7twc79ezyuqmjbx80i22nns0utftjh4izzntvqiqjh4mfpiqhlw9xyjtrp70iui9sc223whzrroavyd1g9i68drkxr15epe384om5ncj00l8x3etacaroe
04r7vpvsuh08fbapc00yvyx4rc2ntaqobahjpbs5m3bj81pr66srmfzv6zfgkq4uyteudtfao2o4lsxfdjel6bo6suyom90qx66v5iwjujwvfvu0vlcdrsx2s2gj163yvt5mqrob9wh4p97an9572wplwlvbowr4bnrtjhgmuc05mk42dyjv3kest1qs8u0p4hz5prsjigw5ny8fzqotk2w8jizds6jy4lsz9dy7v8jkbggza8mm80z4d8i5d42usvl4mntnp72oiw88cm16l1mgbcf6h9qtykxeeuqwb1ey9z0stioz9nyg974htwndhx62qbxrfwz0qtwb2u9848du8yyb63tmc8dlaqb8qedze0bz55w51iam8l5zd95xeps8vq0vf3ly6y7stdyik7cshnaaipi7w8k86ez2yk6v56eetukir0sm80eutyni48zopgz1dthim6j7m5oc9ngwuhri31wr9wf8v5p2os1voh4jb2l0uqz6emjdzd1717adypvcgk0e0hdtve264w4hatvaz46auw 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:03.540 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:03.540 { 00:07:03.540 "subsystems": [ 00:07:03.540 { 00:07:03.540 "subsystem": "bdev", 00:07:03.540 "config": [ 00:07:03.540 { 00:07:03.540 "params": { 00:07:03.540 "trtype": "pcie", 00:07:03.540 "traddr": "0000:00:10.0", 00:07:03.540 "name": "Nvme0" 00:07:03.540 }, 00:07:03.540 "method": "bdev_nvme_attach_controller" 00:07:03.540 }, 00:07:03.540 { 00:07:03.540 "method": "bdev_wait_for_examine" 00:07:03.540 } 00:07:03.540 ] 00:07:03.540 } 00:07:03.540 ] 00:07:03.540 } 00:07:03.540 [2024-07-15 22:33:21.242108] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:03.540 [2024-07-15 22:33:21.242230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62966 ] 00:07:03.798 [2024-07-15 22:33:21.384962] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.798 [2024-07-15 22:33:21.514798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.798 [2024-07-15 22:33:21.571180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.310  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.310 00:07:04.310 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:04.310 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:04.310 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:04.310 22:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:04.310 [2024-07-15 22:33:21.956362] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:04.310 [2024-07-15 22:33:21.956466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62985 ] 00:07:04.310 { 00:07:04.310 "subsystems": [ 00:07:04.310 { 00:07:04.310 "subsystem": "bdev", 00:07:04.310 "config": [ 00:07:04.310 { 00:07:04.310 "params": { 00:07:04.310 "trtype": "pcie", 00:07:04.310 "traddr": "0000:00:10.0", 00:07:04.310 "name": "Nvme0" 00:07:04.310 }, 00:07:04.310 "method": "bdev_nvme_attach_controller" 00:07:04.310 }, 00:07:04.310 { 00:07:04.310 "method": "bdev_wait_for_examine" 00:07:04.310 } 00:07:04.310 ] 00:07:04.310 } 00:07:04.310 ] 00:07:04.310 } 00:07:04.310 [2024-07-15 22:33:22.091824] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.567 [2024-07-15 22:33:22.208460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.567 [2024-07-15 22:33:22.261082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.825  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.825 00:07:04.825 22:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:04.825 ************************************ 00:07:04.825 END TEST dd_rw_offset 00:07:04.825 ************************************ 00:07:04.825 22:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ cat93jutlsaddefihzmzhi76s96ebj4w8hm9mhyk1inlejtgueboar2og2uwail46y51tfdf9rol2nl3b9h64kvytrmismo9jeysatbabpi6bsid1nqze8kachlg4ef9phtfupit9pbkjhzao870335m61nlz3bur5ruvuxnrrk3ljroo50pskvk3czs0xw669f1iqyd74hveyslspxlipmrzk69503w1vfv3vkzl9h9ytdyprez8d3e6cgidkjp8d6l5urg8xn9nzwpb97basfhcx9eo5ce4yg6tepz6athd5ljuhu5u9wddw5785nz7m7il4p557gsdfwm7gy93wtb2cqgas44vjw43d6fbvrdsssvvjp514kc54hanafiq7cpd9zj9i5p6zsfcogcq3qfyb7yvwn901mtvkr66kdyfdcmt63nbd2oouo5ltfn74oua17kw5wafaxqmojbs0ycu7s4ebaseebn4hnf2la1mtonmivvii0tdcxxct5h5zh7b4r0ee68tgnpxfurzcajk71vqlu1k8fd0ph29mfnmcyclqdnogqwz4xcz3hkbpzs13kfcd0blro4ecup2soii93e030n1b11ejfwgvxdj91250aaznavh4mrjydcwtj5b4nzjqo27o0nfsj2zjzung893m77h38bqvy8zgzgow54hwievkia8wonpqhugj9cshgxypletdkqpfosf2z1b15eouy6x1q6cenxv0itbfjg67fqe89fp1yhvx1fur6z2sdp4t0btveqwqadosn7qbdd0o3byyjuxcqhwmm397fplud1g7jixbz9tn1tgrcrc7in752oax1cvslhhzxes5t80xznpgikb2e29cw97wcjw76jnu9twou79f1zlaox3lyndu2vvjmpmnhpw8pofk48pdj31irtihpgxrkueh421yzbwc8fo5d3y1ty22bakqgisszcpx5j81rgoka1211863p1e61zzyu4d43jzwc1tsjqzut472wzqa0mivno8c2di58odfmkr92uawrsdx4uc8lp7zigzaz9de98zdgbhanh8vhniacanwwrwiavf2bmft61505m5digearixcz5gbfzin9jqdxl7oyupzwuo220quxd86lepjb1utqmybily8dcqm3fa6hbdubj7fwf6ojo1w4prdna19qdh5xf5x45vu9pfntgieisbh1mtv9gxdfo5necg9uzcyt3fdvd911pg6e0gtqgahuewutiooy6zeb614cx8fr5r0v18kbl2c7xii0qivrlm3pc4cc3luf2uteshl6slfjabe893ktaomvdyfs8lsnef1qknm7otfwngdhugxuol1ay45s0e9nhe8c16zvbrnmwngff1dzrxxx8d18d6qb8djkgh5p6p41e0c5om3qtaaa3xvz7ha6y14swdl6sei395jp3j2iwc9d8ljnazpve7co225h8etfrafy0vhp0wyfeuh1b2xzw3kmud78hyzwx7ayao6yailuq2uhlicgw7r3mah4n4jhvtrbm038j8cnlcef8rjlmde8u4v5chs2shrkt4n897q2rukrtwq1zi7q852dn56ol18d0qm9fxp8nmng2by0r2bv0qa8rjl5ywwk60u3h8dt1zkjt877kpw0bjmrdnw8zww3tb95kzln2p3vmigd9ywgv7og53j1rfzale5yrkmd8vut4feyzuosgmsbl42o8cbehxudu2lbtf5cevalkpy3kcltj1fwzgzwheuuqmhxn36o4em4eb18ugij9buzfocmge1x5w7n1s8ebrl13fcbjdl0w5csgprjzf2jr1k09eilzwqzkwd8s0zxce66r2rmo8812ywfcx4wmgtxgohpcalnicgucc3jx8242s08f6m3vwpm5v0ff7xdrftszkfb6vfgqba8bujefd3h8i1kgu27l6keye37sq3wuv1q2knthn9zl8c2na572cpyo9eha7khbio36d5eh3zgvjmtuof
xcj52n4lf159nb6851lek7nhhpdqmqbhn1fhuya5arj446npewkp86m3pf37l06bk0awqcbjukblvwi7favebytdhzjpzt9f54ayzpatu1v65v8zwzlxdogfrc6hxlrk34h89fcxmtwdf3bbrv8rsmyjixod9vy0a1kv4unjgt9xhc0f68w6jmecjfhwyyq8zgam20678it28q73a0965ma8h59e2d6w0iwaiso82x4pojpiq13kjra1r062j2wgo4mnqppo5s1iw52geppoe30k1igei5nkrqa2yeuxb7zoasge58rlvm6mup3wg2osw5r5pvie48ouqkubuzbcltkijyy4gbvgeyiknk9ql0ivm0flgjeocz1t12mfg85t6wgkaz6m69b72emrfxj97lrafgnn2372szz9cn10lzes4qirdorjnj92iwjji0fi7renryyb8t0vx82z6l5uphffnik64qqr1fknh9l8aidj5f98n1ehljnypg9vhl2lkn0dey7jrfldvtxz68kmnj4dk0nv3dyezdh0cda9x4gkcx6n3rzsvrs7vnruanubhltyu93rsdjxqtnx0vuylt7dcg564ttad1y43d9yuoqrs1einkick0d2luvzirrdkg62e8njwm99v1j1oe2q7fpaa7zhkftx1eza4p9l6i19zijykc39qvhm7j0coryclirrgd2l8l6dpt93eqlfywed48ijnt6r2iefrygldhbzeu410rmupdzqnveqwme8w7sl866en1rvk481dzrl8n1guzuzs5wk8wpk0cboqx3eo8zegwv5cbruzr00b62ftpof7wynutvg804psven65rp2vf7m77dj733vjbryjnaa23aoishwxpbfsmaq31nou2hbrkkl9jwvgq8ec96ua11bqqzipcnytskotpx3x93b49ip7r8v4aitdptiwuupt039byrkqesdieqcqb8pacnykxjhunrecbo3biangx74qjh7lrf6d1dm6cn7dmi3dhwx7rdjc9dhbcvj4d8y7e79rb96o4xr7z2womg0rlauarq11gj8yij4pvdeatnikf0uhhx3n6d7gwek7i9tw4xhez7k0vddfeqyw7m3u1pl9sxu1vqzvrggjbg8vuxob5bjysiwgo8vx2x01llmymjlqv1xx6r2iz8k0celk5nqtq94pv6xuggpsbo8whxastlka0cgld48snpfyljv44ptejvstcynsxyt80njbrpcx84e14blkft07rqhnisczrhqyezcy690uip92zen5nrd5ljpye7s8ctfxz4xy2zza1dcxmpoehmiigo5b4muip4s692yjxggf2497i2r1j8ik3kqnlt919u91rp7smq6umegljve7rz0otfpddmvk7twc79ezyuqmjbx80i22nns0utftjh4izzntvqiqjh4mfpiqhlw9xyjtrp70iui9sc223whzrroavyd1g9i68drkxr15epe384om5ncj00l8x3etacaroe04r7vpvsuh08fbapc00yvyx4rc2ntaqobahjpbs5m3bj81pr66srmfzv6zfgkq4uyteudtfao2o4lsxfdjel6bo6suyom90qx66v5iwjujwvfvu0vlcdrsx2s2gj163yvt5mqrob9wh4p97an9572wplwlvbowr4bnrtjhgmuc05mk42dyjv3kest1qs8u0p4hz5prsjigw5ny8fzqotk2w8jizds6jy4lsz9dy7v8jkbggza8mm80z4d8i5d42usvl4mntnp72oiw88cm16l1mgbcf6h9qtykxeeuqwb1ey9z0stioz9nyg974htwndhx62qbxrfwz0qtwb2u9848du8yyb63tmc8dlaqb8qedze0bz55w51iam8l5zd95xeps8vq0vf3ly6y7stdyik7cshnaaipi7w8k86ez2yk6v56eetukir0sm80eutyni48zopgz1dthim6j7m5oc9ngwuhri31wr9wf8v5p2os1voh4jb2l0uqz6emjdzd1717adypvcgk0e0hdtve264w4hatvaz46auw == 
\c\a\t\9\3\j\u\t\l\s\a\d\d\e\f\i\h\z\m\z\h\i\7\6\s\9\6\e\b\j\4\w\8\h\m\9\m\h\y\k\1\i\n\l\e\j\t\g\u\e\b\o\a\r\2\o\g\2\u\w\a\i\l\4\6\y\5\1\t\f\d\f\9\r\o\l\2\n\l\3\b\9\h\6\4\k\v\y\t\r\m\i\s\m\o\9\j\e\y\s\a\t\b\a\b\p\i\6\b\s\i\d\1\n\q\z\e\8\k\a\c\h\l\g\4\e\f\9\p\h\t\f\u\p\i\t\9\p\b\k\j\h\z\a\o\8\7\0\3\3\5\m\6\1\n\l\z\3\b\u\r\5\r\u\v\u\x\n\r\r\k\3\l\j\r\o\o\5\0\p\s\k\v\k\3\c\z\s\0\x\w\6\6\9\f\1\i\q\y\d\7\4\h\v\e\y\s\l\s\p\x\l\i\p\m\r\z\k\6\9\5\0\3\w\1\v\f\v\3\v\k\z\l\9\h\9\y\t\d\y\p\r\e\z\8\d\3\e\6\c\g\i\d\k\j\p\8\d\6\l\5\u\r\g\8\x\n\9\n\z\w\p\b\9\7\b\a\s\f\h\c\x\9\e\o\5\c\e\4\y\g\6\t\e\p\z\6\a\t\h\d\5\l\j\u\h\u\5\u\9\w\d\d\w\5\7\8\5\n\z\7\m\7\i\l\4\p\5\5\7\g\s\d\f\w\m\7\g\y\9\3\w\t\b\2\c\q\g\a\s\4\4\v\j\w\4\3\d\6\f\b\v\r\d\s\s\s\v\v\j\p\5\1\4\k\c\5\4\h\a\n\a\f\i\q\7\c\p\d\9\z\j\9\i\5\p\6\z\s\f\c\o\g\c\q\3\q\f\y\b\7\y\v\w\n\9\0\1\m\t\v\k\r\6\6\k\d\y\f\d\c\m\t\6\3\n\b\d\2\o\o\u\o\5\l\t\f\n\7\4\o\u\a\1\7\k\w\5\w\a\f\a\x\q\m\o\j\b\s\0\y\c\u\7\s\4\e\b\a\s\e\e\b\n\4\h\n\f\2\l\a\1\m\t\o\n\m\i\v\v\i\i\0\t\d\c\x\x\c\t\5\h\5\z\h\7\b\4\r\0\e\e\6\8\t\g\n\p\x\f\u\r\z\c\a\j\k\7\1\v\q\l\u\1\k\8\f\d\0\p\h\2\9\m\f\n\m\c\y\c\l\q\d\n\o\g\q\w\z\4\x\c\z\3\h\k\b\p\z\s\1\3\k\f\c\d\0\b\l\r\o\4\e\c\u\p\2\s\o\i\i\9\3\e\0\3\0\n\1\b\1\1\e\j\f\w\g\v\x\d\j\9\1\2\5\0\a\a\z\n\a\v\h\4\m\r\j\y\d\c\w\t\j\5\b\4\n\z\j\q\o\2\7\o\0\n\f\s\j\2\z\j\z\u\n\g\8\9\3\m\7\7\h\3\8\b\q\v\y\8\z\g\z\g\o\w\5\4\h\w\i\e\v\k\i\a\8\w\o\n\p\q\h\u\g\j\9\c\s\h\g\x\y\p\l\e\t\d\k\q\p\f\o\s\f\2\z\1\b\1\5\e\o\u\y\6\x\1\q\6\c\e\n\x\v\0\i\t\b\f\j\g\6\7\f\q\e\8\9\f\p\1\y\h\v\x\1\f\u\r\6\z\2\s\d\p\4\t\0\b\t\v\e\q\w\q\a\d\o\s\n\7\q\b\d\d\0\o\3\b\y\y\j\u\x\c\q\h\w\m\m\3\9\7\f\p\l\u\d\1\g\7\j\i\x\b\z\9\t\n\1\t\g\r\c\r\c\7\i\n\7\5\2\o\a\x\1\c\v\s\l\h\h\z\x\e\s\5\t\8\0\x\z\n\p\g\i\k\b\2\e\2\9\c\w\9\7\w\c\j\w\7\6\j\n\u\9\t\w\o\u\7\9\f\1\z\l\a\o\x\3\l\y\n\d\u\2\v\v\j\m\p\m\n\h\p\w\8\p\o\f\k\4\8\p\d\j\3\1\i\r\t\i\h\p\g\x\r\k\u\e\h\4\2\1\y\z\b\w\c\8\f\o\5\d\3\y\1\t\y\2\2\b\a\k\q\g\i\s\s\z\c\p\x\5\j\8\1\r\g\o\k\a\1\2\1\1\8\6\3\p\1\e\6\1\z\z\y\u\4\d\4\3\j\z\w\c\1\t\s\j\q\z\u\t\4\7\2\w\z\q\a\0\m\i\v\n\o\8\c\2\d\i\5\8\o\d\f\m\k\r\9\2\u\a\w\r\s\d\x\4\u\c\8\l\p\7\z\i\g\z\a\z\9\d\e\9\8\z\d\g\b\h\a\n\h\8\v\h\n\i\a\c\a\n\w\w\r\w\i\a\v\f\2\b\m\f\t\6\1\5\0\5\m\5\d\i\g\e\a\r\i\x\c\z\5\g\b\f\z\i\n\9\j\q\d\x\l\7\o\y\u\p\z\w\u\o\2\2\0\q\u\x\d\8\6\l\e\p\j\b\1\u\t\q\m\y\b\i\l\y\8\d\c\q\m\3\f\a\6\h\b\d\u\b\j\7\f\w\f\6\o\j\o\1\w\4\p\r\d\n\a\1\9\q\d\h\5\x\f\5\x\4\5\v\u\9\p\f\n\t\g\i\e\i\s\b\h\1\m\t\v\9\g\x\d\f\o\5\n\e\c\g\9\u\z\c\y\t\3\f\d\v\d\9\1\1\p\g\6\e\0\g\t\q\g\a\h\u\e\w\u\t\i\o\o\y\6\z\e\b\6\1\4\c\x\8\f\r\5\r\0\v\1\8\k\b\l\2\c\7\x\i\i\0\q\i\v\r\l\m\3\p\c\4\c\c\3\l\u\f\2\u\t\e\s\h\l\6\s\l\f\j\a\b\e\8\9\3\k\t\a\o\m\v\d\y\f\s\8\l\s\n\e\f\1\q\k\n\m\7\o\t\f\w\n\g\d\h\u\g\x\u\o\l\1\a\y\4\5\s\0\e\9\n\h\e\8\c\1\6\z\v\b\r\n\m\w\n\g\f\f\1\d\z\r\x\x\x\8\d\1\8\d\6\q\b\8\d\j\k\g\h\5\p\6\p\4\1\e\0\c\5\o\m\3\q\t\a\a\a\3\x\v\z\7\h\a\6\y\1\4\s\w\d\l\6\s\e\i\3\9\5\j\p\3\j\2\i\w\c\9\d\8\l\j\n\a\z\p\v\e\7\c\o\2\2\5\h\8\e\t\f\r\a\f\y\0\v\h\p\0\w\y\f\e\u\h\1\b\2\x\z\w\3\k\m\u\d\7\8\h\y\z\w\x\7\a\y\a\o\6\y\a\i\l\u\q\2\u\h\l\i\c\g\w\7\r\3\m\a\h\4\n\4\j\h\v\t\r\b\m\0\3\8\j\8\c\n\l\c\e\f\8\r\j\l\m\d\e\8\u\4\v\5\c\h\s\2\s\h\r\k\t\4\n\8\9\7\q\2\r\u\k\r\t\w\q\1\z\i\7\q\8\5\2\d\n\5\6\o\l\1\8\d\0\q\m\9\f\x\p\8\n\m\n\g\2\b\y\0\r\2\b\v\0\q\a\8\r\j\l\5\y\w\w\k\6\0\u\3\h\8\d\t\1\z\k\j\t\8\7\7\k\p\w\0\b\j\m\r\d\n\w\8\z\w\w\3\t\b\9\5\k\z\l\n\2\p\3\v\m\i\g\d\9\y\w\g\v\7\o\g\5\3\j\1\r\f\z\a\l\e\5\y\r\k\m\d\8\v\u\t\4\f\e\y\z\u\o\s\g\m\s\b\l\4\2\o\8\c\b\e\h\x\u\d\u\2\l\b\t\f\5\c\e\v\a\l\k\p\y\3\k\c\l\t\j\1\f\w\z\g\z\w\h\e\
u\u\q\m\h\x\n\3\6\o\4\e\m\4\e\b\1\8\u\g\i\j\9\b\u\z\f\o\c\m\g\e\1\x\5\w\7\n\1\s\8\e\b\r\l\1\3\f\c\b\j\d\l\0\w\5\c\s\g\p\r\j\z\f\2\j\r\1\k\0\9\e\i\l\z\w\q\z\k\w\d\8\s\0\z\x\c\e\6\6\r\2\r\m\o\8\8\1\2\y\w\f\c\x\4\w\m\g\t\x\g\o\h\p\c\a\l\n\i\c\g\u\c\c\3\j\x\8\2\4\2\s\0\8\f\6\m\3\v\w\p\m\5\v\0\f\f\7\x\d\r\f\t\s\z\k\f\b\6\v\f\g\q\b\a\8\b\u\j\e\f\d\3\h\8\i\1\k\g\u\2\7\l\6\k\e\y\e\3\7\s\q\3\w\u\v\1\q\2\k\n\t\h\n\9\z\l\8\c\2\n\a\5\7\2\c\p\y\o\9\e\h\a\7\k\h\b\i\o\3\6\d\5\e\h\3\z\g\v\j\m\t\u\o\f\x\c\j\5\2\n\4\l\f\1\5\9\n\b\6\8\5\1\l\e\k\7\n\h\h\p\d\q\m\q\b\h\n\1\f\h\u\y\a\5\a\r\j\4\4\6\n\p\e\w\k\p\8\6\m\3\p\f\3\7\l\0\6\b\k\0\a\w\q\c\b\j\u\k\b\l\v\w\i\7\f\a\v\e\b\y\t\d\h\z\j\p\z\t\9\f\5\4\a\y\z\p\a\t\u\1\v\6\5\v\8\z\w\z\l\x\d\o\g\f\r\c\6\h\x\l\r\k\3\4\h\8\9\f\c\x\m\t\w\d\f\3\b\b\r\v\8\r\s\m\y\j\i\x\o\d\9\v\y\0\a\1\k\v\4\u\n\j\g\t\9\x\h\c\0\f\6\8\w\6\j\m\e\c\j\f\h\w\y\y\q\8\z\g\a\m\2\0\6\7\8\i\t\2\8\q\7\3\a\0\9\6\5\m\a\8\h\5\9\e\2\d\6\w\0\i\w\a\i\s\o\8\2\x\4\p\o\j\p\i\q\1\3\k\j\r\a\1\r\0\6\2\j\2\w\g\o\4\m\n\q\p\p\o\5\s\1\i\w\5\2\g\e\p\p\o\e\3\0\k\1\i\g\e\i\5\n\k\r\q\a\2\y\e\u\x\b\7\z\o\a\s\g\e\5\8\r\l\v\m\6\m\u\p\3\w\g\2\o\s\w\5\r\5\p\v\i\e\4\8\o\u\q\k\u\b\u\z\b\c\l\t\k\i\j\y\y\4\g\b\v\g\e\y\i\k\n\k\9\q\l\0\i\v\m\0\f\l\g\j\e\o\c\z\1\t\1\2\m\f\g\8\5\t\6\w\g\k\a\z\6\m\6\9\b\7\2\e\m\r\f\x\j\9\7\l\r\a\f\g\n\n\2\3\7\2\s\z\z\9\c\n\1\0\l\z\e\s\4\q\i\r\d\o\r\j\n\j\9\2\i\w\j\j\i\0\f\i\7\r\e\n\r\y\y\b\8\t\0\v\x\8\2\z\6\l\5\u\p\h\f\f\n\i\k\6\4\q\q\r\1\f\k\n\h\9\l\8\a\i\d\j\5\f\9\8\n\1\e\h\l\j\n\y\p\g\9\v\h\l\2\l\k\n\0\d\e\y\7\j\r\f\l\d\v\t\x\z\6\8\k\m\n\j\4\d\k\0\n\v\3\d\y\e\z\d\h\0\c\d\a\9\x\4\g\k\c\x\6\n\3\r\z\s\v\r\s\7\v\n\r\u\a\n\u\b\h\l\t\y\u\9\3\r\s\d\j\x\q\t\n\x\0\v\u\y\l\t\7\d\c\g\5\6\4\t\t\a\d\1\y\4\3\d\9\y\u\o\q\r\s\1\e\i\n\k\i\c\k\0\d\2\l\u\v\z\i\r\r\d\k\g\6\2\e\8\n\j\w\m\9\9\v\1\j\1\o\e\2\q\7\f\p\a\a\7\z\h\k\f\t\x\1\e\z\a\4\p\9\l\6\i\1\9\z\i\j\y\k\c\3\9\q\v\h\m\7\j\0\c\o\r\y\c\l\i\r\r\g\d\2\l\8\l\6\d\p\t\9\3\e\q\l\f\y\w\e\d\4\8\i\j\n\t\6\r\2\i\e\f\r\y\g\l\d\h\b\z\e\u\4\1\0\r\m\u\p\d\z\q\n\v\e\q\w\m\e\8\w\7\s\l\8\6\6\e\n\1\r\v\k\4\8\1\d\z\r\l\8\n\1\g\u\z\u\z\s\5\w\k\8\w\p\k\0\c\b\o\q\x\3\e\o\8\z\e\g\w\v\5\c\b\r\u\z\r\0\0\b\6\2\f\t\p\o\f\7\w\y\n\u\t\v\g\8\0\4\p\s\v\e\n\6\5\r\p\2\v\f\7\m\7\7\d\j\7\3\3\v\j\b\r\y\j\n\a\a\2\3\a\o\i\s\h\w\x\p\b\f\s\m\a\q\3\1\n\o\u\2\h\b\r\k\k\l\9\j\w\v\g\q\8\e\c\9\6\u\a\1\1\b\q\q\z\i\p\c\n\y\t\s\k\o\t\p\x\3\x\9\3\b\4\9\i\p\7\r\8\v\4\a\i\t\d\p\t\i\w\u\u\p\t\0\3\9\b\y\r\k\q\e\s\d\i\e\q\c\q\b\8\p\a\c\n\y\k\x\j\h\u\n\r\e\c\b\o\3\b\i\a\n\g\x\7\4\q\j\h\7\l\r\f\6\d\1\d\m\6\c\n\7\d\m\i\3\d\h\w\x\7\r\d\j\c\9\d\h\b\c\v\j\4\d\8\y\7\e\7\9\r\b\9\6\o\4\x\r\7\z\2\w\o\m\g\0\r\l\a\u\a\r\q\1\1\g\j\8\y\i\j\4\p\v\d\e\a\t\n\i\k\f\0\u\h\h\x\3\n\6\d\7\g\w\e\k\7\i\9\t\w\4\x\h\e\z\7\k\0\v\d\d\f\e\q\y\w\7\m\3\u\1\p\l\9\s\x\u\1\v\q\z\v\r\g\g\j\b\g\8\v\u\x\o\b\5\b\j\y\s\i\w\g\o\8\v\x\2\x\0\1\l\l\m\y\m\j\l\q\v\1\x\x\6\r\2\i\z\8\k\0\c\e\l\k\5\n\q\t\q\9\4\p\v\6\x\u\g\g\p\s\b\o\8\w\h\x\a\s\t\l\k\a\0\c\g\l\d\4\8\s\n\p\f\y\l\j\v\4\4\p\t\e\j\v\s\t\c\y\n\s\x\y\t\8\0\n\j\b\r\p\c\x\8\4\e\1\4\b\l\k\f\t\0\7\r\q\h\n\i\s\c\z\r\h\q\y\e\z\c\y\6\9\0\u\i\p\9\2\z\e\n\5\n\r\d\5\l\j\p\y\e\7\s\8\c\t\f\x\z\4\x\y\2\z\z\a\1\d\c\x\m\p\o\e\h\m\i\i\g\o\5\b\4\m\u\i\p\4\s\6\9\2\y\j\x\g\g\f\2\4\9\7\i\2\r\1\j\8\i\k\3\k\q\n\l\t\9\1\9\u\9\1\r\p\7\s\m\q\6\u\m\e\g\l\j\v\e\7\r\z\0\o\t\f\p\d\d\m\v\k\7\t\w\c\7\9\e\z\y\u\q\m\j\b\x\8\0\i\2\2\n\n\s\0\u\t\f\t\j\h\4\i\z\z\n\t\v\q\i\q\j\h\4\m\f\p\i\q\h\l\w\9\x\y\j\t\r\p\7\0\i\u\i\9\s\c\2\2\3\w\h\z\r\r\o\a\v\y\d\1\g\9\i\6\8\d\r\k\x\r\1\5\e\p\e\3\8\4\o\m\5\n\c\j\0\0\l\8\x\3\e\t\a\c\a\r\o\e\0\4\r\7\v
\p\v\s\u\h\0\8\f\b\a\p\c\0\0\y\v\y\x\4\r\c\2\n\t\a\q\o\b\a\h\j\p\b\s\5\m\3\b\j\8\1\p\r\6\6\s\r\m\f\z\v\6\z\f\g\k\q\4\u\y\t\e\u\d\t\f\a\o\2\o\4\l\s\x\f\d\j\e\l\6\b\o\6\s\u\y\o\m\9\0\q\x\6\6\v\5\i\w\j\u\j\w\v\f\v\u\0\v\l\c\d\r\s\x\2\s\2\g\j\1\6\3\y\v\t\5\m\q\r\o\b\9\w\h\4\p\9\7\a\n\9\5\7\2\w\p\l\w\l\v\b\o\w\r\4\b\n\r\t\j\h\g\m\u\c\0\5\m\k\4\2\d\y\j\v\3\k\e\s\t\1\q\s\8\u\0\p\4\h\z\5\p\r\s\j\i\g\w\5\n\y\8\f\z\q\o\t\k\2\w\8\j\i\z\d\s\6\j\y\4\l\s\z\9\d\y\7\v\8\j\k\b\g\g\z\a\8\m\m\8\0\z\4\d\8\i\5\d\4\2\u\s\v\l\4\m\n\t\n\p\7\2\o\i\w\8\8\c\m\1\6\l\1\m\g\b\c\f\6\h\9\q\t\y\k\x\e\e\u\q\w\b\1\e\y\9\z\0\s\t\i\o\z\9\n\y\g\9\7\4\h\t\w\n\d\h\x\6\2\q\b\x\r\f\w\z\0\q\t\w\b\2\u\9\8\4\8\d\u\8\y\y\b\6\3\t\m\c\8\d\l\a\q\b\8\q\e\d\z\e\0\b\z\5\5\w\5\1\i\a\m\8\l\5\z\d\9\5\x\e\p\s\8\v\q\0\v\f\3\l\y\6\y\7\s\t\d\y\i\k\7\c\s\h\n\a\a\i\p\i\7\w\8\k\8\6\e\z\2\y\k\6\v\5\6\e\e\t\u\k\i\r\0\s\m\8\0\e\u\t\y\n\i\4\8\z\o\p\g\z\1\d\t\h\i\m\6\j\7\m\5\o\c\9\n\g\w\u\h\r\i\3\1\w\r\9\w\f\8\v\5\p\2\o\s\1\v\o\h\4\j\b\2\l\0\u\q\z\6\e\m\j\d\z\d\1\7\1\7\a\d\y\p\v\c\g\k\0\e\0\h\d\t\v\e\2\6\4\w\4\h\a\t\v\a\z\4\6\a\u\w ]] 00:07:04.825 00:07:04.825 real 0m1.480s 00:07:04.825 user 0m1.051s 00:07:04.825 sys 0m0.594s 00:07:04.825 22:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.825 22:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.083 22:33:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 [2024-07-15 22:33:22.707605] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:05.083 [2024-07-15 22:33:22.707714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63020 ] 00:07:05.083 { 00:07:05.083 "subsystems": [ 00:07:05.083 { 00:07:05.083 "subsystem": "bdev", 00:07:05.083 "config": [ 00:07:05.083 { 00:07:05.083 "params": { 00:07:05.083 "trtype": "pcie", 00:07:05.083 "traddr": "0000:00:10.0", 00:07:05.083 "name": "Nvme0" 00:07:05.083 }, 00:07:05.083 "method": "bdev_nvme_attach_controller" 00:07:05.083 }, 00:07:05.083 { 00:07:05.083 "method": "bdev_wait_for_examine" 00:07:05.083 } 00:07:05.083 ] 00:07:05.083 } 00:07:05.083 ] 00:07:05.083 } 00:07:05.083 [2024-07-15 22:33:22.847683] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.341 [2024-07-15 22:33:22.977528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.341 [2024-07-15 22:33:23.032870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.598  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:05.598 00:07:05.598 22:33:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.598 ************************************ 00:07:05.598 END TEST spdk_dd_basic_rw 00:07:05.598 ************************************ 00:07:05.598 00:07:05.598 real 0m22.027s 00:07:05.598 user 0m16.120s 00:07:05.598 sys 0m8.006s 00:07:05.598 22:33:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.598 22:33:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.598 22:33:23 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:05.598 22:33:23 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.598 22:33:23 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.598 22:33:23 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.598 22:33:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.856 ************************************ 00:07:05.856 START TEST spdk_dd_posix 00:07:05.856 ************************************ 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.856 * Looking for test storage... 
00:07:05.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:05.856 * First test run, liburing in use 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:05.856 ************************************ 00:07:05.856 START TEST dd_flag_append 00:07:05.856 ************************************ 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=4d3v7ss7q8msk7e15b1uecxp0wie98xp 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=mw6mtb6wfwervrdn07v6gur2nzogyj4m 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 4d3v7ss7q8msk7e15b1uecxp0wie98xp 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s mw6mtb6wfwervrdn07v6gur2nzogyj4m 00:07:05.856 22:33:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:05.856 [2024-07-15 22:33:23.602362] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:05.856 [2024-07-15 22:33:23.602447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63078 ] 00:07:06.113 [2024-07-15 22:33:23.735836] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.113 [2024-07-15 22:33:23.852943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.113 [2024-07-15 22:33:23.907625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.371  Copying: 32/32 [B] (average 31 kBps) 00:07:06.371 00:07:06.371 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ mw6mtb6wfwervrdn07v6gur2nzogyj4m4d3v7ss7q8msk7e15b1uecxp0wie98xp == \m\w\6\m\t\b\6\w\f\w\e\r\v\r\d\n\0\7\v\6\g\u\r\2\n\z\o\g\y\j\4\m\4\d\3\v\7\s\s\7\q\8\m\s\k\7\e\1\5\b\1\u\e\c\x\p\0\w\i\e\9\8\x\p ]] 00:07:06.371 00:07:06.371 real 0m0.609s 00:07:06.371 user 0m0.351s 00:07:06.371 sys 0m0.267s 00:07:06.371 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.371 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:06.371 ************************************ 00:07:06.371 END TEST dd_flag_append 00:07:06.371 ************************************ 00:07:06.371 22:33:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:06.372 22:33:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:06.372 22:33:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.372 22:33:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.372 22:33:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.630 ************************************ 00:07:06.630 START TEST dd_flag_directory 00:07:06.630 ************************************ 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.630 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.630 [2024-07-15 22:33:24.280850] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:06.630 [2024-07-15 22:33:24.280975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:07:06.630 [2024-07-15 22:33:24.416751] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.889 [2024-07-15 22:33:24.536931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.889 [2024-07-15 22:33:24.590456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.889 [2024-07-15 22:33:24.624641] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.889 [2024-07-15 22:33:24.624703] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.889 [2024-07-15 22:33:24.624718] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.205 [2024-07-15 22:33:24.738466] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.205 22:33:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.205 [2024-07-15 22:33:24.886745] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:07.205 [2024-07-15 22:33:24.886825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63122 ] 00:07:07.205 [2024-07-15 22:33:25.017577] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.464 [2024-07-15 22:33:25.136098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.464 [2024-07-15 22:33:25.189455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.464 [2024-07-15 22:33:25.222782] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.464 [2024-07-15 22:33:25.222836] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.464 [2024-07-15 22:33:25.222851] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.722 [2024-07-15 22:33:25.333072] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.722 00:07:07.722 real 0m1.246s 00:07:07.722 user 0m0.752s 00:07:07.722 sys 0m0.279s 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 END TEST dd_flag_directory 00:07:07.722 
************************************ 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 START TEST dd_flag_nofollow 00:07:07.722 ************************************ 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.722 22:33:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.980 
[2024-07-15 22:33:25.579722] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:07.980 [2024-07-15 22:33:25.579821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:07:07.980 [2024-07-15 22:33:25.716395] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.980 [2024-07-15 22:33:25.797804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.238 [2024-07-15 22:33:25.850965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.238 [2024-07-15 22:33:25.881927] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:08.238 [2024-07-15 22:33:25.882016] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:08.238 [2024-07-15 22:33:25.882030] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.238 [2024-07-15 22:33:25.993633] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.497 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.497 [2024-07-15 22:33:26.167699] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:08.497 [2024-07-15 22:33:26.167850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63160 ] 00:07:08.497 [2024-07-15 22:33:26.310780] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.757 [2024-07-15 22:33:26.432971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.757 [2024-07-15 22:33:26.487940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.757 [2024-07-15 22:33:26.522250] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.757 [2024-07-15 22:33:26.522290] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.757 [2024-07-15 22:33:26.522305] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.016 [2024-07-15 22:33:26.637606] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:09.016 22:33:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.016 [2024-07-15 22:33:26.804934] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
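The dd_flag_nofollow case links dd.dump0.link to dd.dump0 (and dump1.link to dump1), expects both the read-side and the write-side copy to fail with ELOOP ("Too many levels of symbolic links") when nofollow is set, and then confirms a plain copy through the same link succeeds (the 512/512 B copy that follows). A standalone sketch of the read-side half, with the negation written out instead of using the NOT helper; nofollow is assumed to behave like O_NOFOLLOW:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ln -fs dd.dump0 dd.dump0.link
    # Opening a path whose final component is a symlink with nofollow set
    # fails with ELOOP, so this copy must not succeed.
    if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
        echo "nofollow unexpectedly followed the link" >&2; exit 1
    fi
    # Without the flag the link is simply dereferenced and the copy succeeds.
    "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1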
00:07:09.016 [2024-07-15 22:33:26.805036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63173 ] 00:07:09.274 [2024-07-15 22:33:26.944262] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.275 [2024-07-15 22:33:27.039787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.275 [2024-07-15 22:33:27.094815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.532  Copying: 512/512 [B] (average 500 kBps) 00:07:09.532 00:07:09.533 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ kfdu2d1yqpdxi9qaquenqsbvq9nzdmfg0a4c2uwr6ik0ju9ljncojvk8rplxhe5malf1z8tr0lauoeomub4ohu6h46qqfhxbqhfofv8pp2jnqy1fcr3zo25ziq0zl3w6hwzwlhajz0cehkeoisusna8xuvph684xrrqma84f6v14zcjw3p2mz8cpl4fxbgbtajpycdbkbww7l75f05hccab0s8piryh57z34frqfa9u6b0zf44is9crfyfwxt1og8c7udw6k0edczucmljksjvn0sbrbkhgx3yjuanagl5g0m4chu6awpkdfqvkql1dhere0fu8r4idbdcun6vgy1fqnc7e3mxuxrlibtp2hy8jh7qfwzxq437bvno5moey6q9bte7cptdhpbw0anpqkeurusc9x2xo4g1gcqng1t1y33irquwaya8xeql12jiq1bjwxpc6vo6xwg2lijvl6cevwennhjh1g8rm3qwq5t8bzppoajaro00h1u7kuaslq == \k\f\d\u\2\d\1\y\q\p\d\x\i\9\q\a\q\u\e\n\q\s\b\v\q\9\n\z\d\m\f\g\0\a\4\c\2\u\w\r\6\i\k\0\j\u\9\l\j\n\c\o\j\v\k\8\r\p\l\x\h\e\5\m\a\l\f\1\z\8\t\r\0\l\a\u\o\e\o\m\u\b\4\o\h\u\6\h\4\6\q\q\f\h\x\b\q\h\f\o\f\v\8\p\p\2\j\n\q\y\1\f\c\r\3\z\o\2\5\z\i\q\0\z\l\3\w\6\h\w\z\w\l\h\a\j\z\0\c\e\h\k\e\o\i\s\u\s\n\a\8\x\u\v\p\h\6\8\4\x\r\r\q\m\a\8\4\f\6\v\1\4\z\c\j\w\3\p\2\m\z\8\c\p\l\4\f\x\b\g\b\t\a\j\p\y\c\d\b\k\b\w\w\7\l\7\5\f\0\5\h\c\c\a\b\0\s\8\p\i\r\y\h\5\7\z\3\4\f\r\q\f\a\9\u\6\b\0\z\f\4\4\i\s\9\c\r\f\y\f\w\x\t\1\o\g\8\c\7\u\d\w\6\k\0\e\d\c\z\u\c\m\l\j\k\s\j\v\n\0\s\b\r\b\k\h\g\x\3\y\j\u\a\n\a\g\l\5\g\0\m\4\c\h\u\6\a\w\p\k\d\f\q\v\k\q\l\1\d\h\e\r\e\0\f\u\8\r\4\i\d\b\d\c\u\n\6\v\g\y\1\f\q\n\c\7\e\3\m\x\u\x\r\l\i\b\t\p\2\h\y\8\j\h\7\q\f\w\z\x\q\4\3\7\b\v\n\o\5\m\o\e\y\6\q\9\b\t\e\7\c\p\t\d\h\p\b\w\0\a\n\p\q\k\e\u\r\u\s\c\9\x\2\x\o\4\g\1\g\c\q\n\g\1\t\1\y\3\3\i\r\q\u\w\a\y\a\8\x\e\q\l\1\2\j\i\q\1\b\j\w\x\p\c\6\v\o\6\x\w\g\2\l\i\j\v\l\6\c\e\v\w\e\n\n\h\j\h\1\g\8\r\m\3\q\w\q\5\t\8\b\z\p\p\o\a\j\a\r\o\0\0\h\1\u\7\k\u\a\s\l\q ]] 00:07:09.533 00:07:09.533 real 0m1.846s 00:07:09.533 user 0m1.043s 00:07:09.533 sys 0m0.615s 00:07:09.533 ************************************ 00:07:09.533 END TEST dd_flag_nofollow 00:07:09.533 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.533 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:09.533 ************************************ 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.790 ************************************ 00:07:09.790 START TEST dd_flag_noatime 00:07:09.790 ************************************ 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:09.790 22:33:27 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.790 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721082807 00:07:09.791 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.791 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721082807 00:07:09.791 22:33:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:10.723 22:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.723 [2024-07-15 22:33:28.489442] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:10.723 [2024-07-15 22:33:28.489567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63210 ] 00:07:10.981 [2024-07-15 22:33:28.631037] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.981 [2024-07-15 22:33:28.757625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.981 [2024-07-15 22:33:28.814530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.497  Copying: 512/512 [B] (average 500 kBps) 00:07:11.497 00:07:11.497 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.497 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721082807 )) 00:07:11.497 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.497 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721082807 )) 00:07:11.497 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.497 [2024-07-15 22:33:29.162165] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
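dd_flag_noatime records the input file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and asserts the atime did not move (the (( atime_if == 1721082807 )) check above). A standalone sketch of that assertion; noatime is assumed to map to O_NOATIME, and file names are shortened:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    # Reading the input with noatime must not update its access time.
    "$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_before ))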
00:07:11.497 [2024-07-15 22:33:29.162300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:07:11.497 [2024-07-15 22:33:29.299329] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.778 [2024-07-15 22:33:29.426276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.778 [2024-07-15 22:33:29.483019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.106  Copying: 512/512 [B] (average 500 kBps) 00:07:12.106 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721082809 )) 00:07:12.106 00:07:12.106 real 0m2.345s 00:07:12.106 user 0m0.794s 00:07:12.106 sys 0m0.591s 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:12.106 ************************************ 00:07:12.106 END TEST dd_flag_noatime 00:07:12.106 ************************************ 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:12.106 ************************************ 00:07:12.106 START TEST dd_flags_misc 00:07:12.106 ************************************ 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.106 22:33:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.106 [2024-07-15 22:33:29.870541] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
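dd_flags_misc iterates every read-flag/write-flag combination over a fresh 512-byte buffer, as the flags_ro and flags_rw assignments above show. A sketch of that loop, reconstructed from the trace; the content check is shown here as a plain cmp, which is a simplification of the escaped [[ == ]] comparison the test actually performs, and gen_bytes is the test helper visible in the trace:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    flags_ro=(direct nonblock)                # input-side open flags
    flags_rw=("${flags_ro[@]}" sync dsync)    # output side adds sync and dsync
    for flag_ro in "${flags_ro[@]}"; do
      # gen_bytes 512 refills dd.dump0 with fresh random data here
      for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" \
                   --of=dd.dump1 --oflag="$flag_rw"
        cmp dd.dump0 dd.dump1   # every combination must copy the 512 B intact
      done
    done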
00:07:12.106 [2024-07-15 22:33:29.870666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63263 ] 00:07:12.365 [2024-07-15 22:33:30.012604] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.365 [2024-07-15 22:33:30.133233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.365 [2024-07-15 22:33:30.187478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.623  Copying: 512/512 [B] (average 500 kBps) 00:07:12.623 00:07:12.623 22:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r9a24btocogi606abwoz2lkqczu5aehq49b2il93d1enfpls9xinsm4wt7n4ead9e4r8q5olxziqk7psgtwjf40lo2xt2emlfw5qd7ypzwwmigrpg2kzlarue6pm3aium1b5swpn2coqbg9ed5sz4gqmjmefxhxgylkyys2r7gu3paac3q6zk4qcqn3fs1hazxhj7a56xx44ly9dlpcbixoax12acaplk2be5g24yp0ykrmi7btad8svs5p3m0nllvo4yf6a79supof7wi25czkmqn3tl4sngv1mrpz9ukn5dehbt2x828g5a1e5b8b5tfxx5iz9xoi5n0vps15cmlnq5hzigwer8k5oriigednplbjnal96qhu8oa586a62vdmo37nhgtoa36nk0rrkfnm6p90bcvf9fzpcfmo23qypdwsb6ha1jqunga6ljz4ncop83jgincn5uoasbatjmpjh12lq1cviax7h2wbpe77zayns260hftaxoej46eqv == \r\9\a\2\4\b\t\o\c\o\g\i\6\0\6\a\b\w\o\z\2\l\k\q\c\z\u\5\a\e\h\q\4\9\b\2\i\l\9\3\d\1\e\n\f\p\l\s\9\x\i\n\s\m\4\w\t\7\n\4\e\a\d\9\e\4\r\8\q\5\o\l\x\z\i\q\k\7\p\s\g\t\w\j\f\4\0\l\o\2\x\t\2\e\m\l\f\w\5\q\d\7\y\p\z\w\w\m\i\g\r\p\g\2\k\z\l\a\r\u\e\6\p\m\3\a\i\u\m\1\b\5\s\w\p\n\2\c\o\q\b\g\9\e\d\5\s\z\4\g\q\m\j\m\e\f\x\h\x\g\y\l\k\y\y\s\2\r\7\g\u\3\p\a\a\c\3\q\6\z\k\4\q\c\q\n\3\f\s\1\h\a\z\x\h\j\7\a\5\6\x\x\4\4\l\y\9\d\l\p\c\b\i\x\o\a\x\1\2\a\c\a\p\l\k\2\b\e\5\g\2\4\y\p\0\y\k\r\m\i\7\b\t\a\d\8\s\v\s\5\p\3\m\0\n\l\l\v\o\4\y\f\6\a\7\9\s\u\p\o\f\7\w\i\2\5\c\z\k\m\q\n\3\t\l\4\s\n\g\v\1\m\r\p\z\9\u\k\n\5\d\e\h\b\t\2\x\8\2\8\g\5\a\1\e\5\b\8\b\5\t\f\x\x\5\i\z\9\x\o\i\5\n\0\v\p\s\1\5\c\m\l\n\q\5\h\z\i\g\w\e\r\8\k\5\o\r\i\i\g\e\d\n\p\l\b\j\n\a\l\9\6\q\h\u\8\o\a\5\8\6\a\6\2\v\d\m\o\3\7\n\h\g\t\o\a\3\6\n\k\0\r\r\k\f\n\m\6\p\9\0\b\c\v\f\9\f\z\p\c\f\m\o\2\3\q\y\p\d\w\s\b\6\h\a\1\j\q\u\n\g\a\6\l\j\z\4\n\c\o\p\8\3\j\g\i\n\c\n\5\u\o\a\s\b\a\t\j\m\p\j\h\1\2\l\q\1\c\v\i\a\x\7\h\2\w\b\p\e\7\7\z\a\y\n\s\2\6\0\h\f\t\a\x\o\e\j\4\6\e\q\v ]] 00:07:12.623 22:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.623 22:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:12.881 [2024-07-15 22:33:30.496279] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
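The direct flag used on the read side of the combination that just completed (and on the write side of several others) presumably maps to O_DIRECT, which bypasses the page cache and requires block-aligned transfers; the 512 B size used here matches the usual logical block size, so the copies stay aligned. An illustrative invocation of the direct/direct case only:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # O_DIRECT-style copy: a length that is not a multiple of the block size
    # would typically fail with EINVAL, 512 B is safe on common devices.
    "$SPDK_DD" --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=direct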
00:07:12.881 [2024-07-15 22:33:30.496390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63267 ] 00:07:12.881 [2024-07-15 22:33:30.635082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.139 [2024-07-15 22:33:30.756370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.139 [2024-07-15 22:33:30.809509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.397  Copying: 512/512 [B] (average 500 kBps) 00:07:13.397 00:07:13.397 22:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r9a24btocogi606abwoz2lkqczu5aehq49b2il93d1enfpls9xinsm4wt7n4ead9e4r8q5olxziqk7psgtwjf40lo2xt2emlfw5qd7ypzwwmigrpg2kzlarue6pm3aium1b5swpn2coqbg9ed5sz4gqmjmefxhxgylkyys2r7gu3paac3q6zk4qcqn3fs1hazxhj7a56xx44ly9dlpcbixoax12acaplk2be5g24yp0ykrmi7btad8svs5p3m0nllvo4yf6a79supof7wi25czkmqn3tl4sngv1mrpz9ukn5dehbt2x828g5a1e5b8b5tfxx5iz9xoi5n0vps15cmlnq5hzigwer8k5oriigednplbjnal96qhu8oa586a62vdmo37nhgtoa36nk0rrkfnm6p90bcvf9fzpcfmo23qypdwsb6ha1jqunga6ljz4ncop83jgincn5uoasbatjmpjh12lq1cviax7h2wbpe77zayns260hftaxoej46eqv == \r\9\a\2\4\b\t\o\c\o\g\i\6\0\6\a\b\w\o\z\2\l\k\q\c\z\u\5\a\e\h\q\4\9\b\2\i\l\9\3\d\1\e\n\f\p\l\s\9\x\i\n\s\m\4\w\t\7\n\4\e\a\d\9\e\4\r\8\q\5\o\l\x\z\i\q\k\7\p\s\g\t\w\j\f\4\0\l\o\2\x\t\2\e\m\l\f\w\5\q\d\7\y\p\z\w\w\m\i\g\r\p\g\2\k\z\l\a\r\u\e\6\p\m\3\a\i\u\m\1\b\5\s\w\p\n\2\c\o\q\b\g\9\e\d\5\s\z\4\g\q\m\j\m\e\f\x\h\x\g\y\l\k\y\y\s\2\r\7\g\u\3\p\a\a\c\3\q\6\z\k\4\q\c\q\n\3\f\s\1\h\a\z\x\h\j\7\a\5\6\x\x\4\4\l\y\9\d\l\p\c\b\i\x\o\a\x\1\2\a\c\a\p\l\k\2\b\e\5\g\2\4\y\p\0\y\k\r\m\i\7\b\t\a\d\8\s\v\s\5\p\3\m\0\n\l\l\v\o\4\y\f\6\a\7\9\s\u\p\o\f\7\w\i\2\5\c\z\k\m\q\n\3\t\l\4\s\n\g\v\1\m\r\p\z\9\u\k\n\5\d\e\h\b\t\2\x\8\2\8\g\5\a\1\e\5\b\8\b\5\t\f\x\x\5\i\z\9\x\o\i\5\n\0\v\p\s\1\5\c\m\l\n\q\5\h\z\i\g\w\e\r\8\k\5\o\r\i\i\g\e\d\n\p\l\b\j\n\a\l\9\6\q\h\u\8\o\a\5\8\6\a\6\2\v\d\m\o\3\7\n\h\g\t\o\a\3\6\n\k\0\r\r\k\f\n\m\6\p\9\0\b\c\v\f\9\f\z\p\c\f\m\o\2\3\q\y\p\d\w\s\b\6\h\a\1\j\q\u\n\g\a\6\l\j\z\4\n\c\o\p\8\3\j\g\i\n\c\n\5\u\o\a\s\b\a\t\j\m\p\j\h\1\2\l\q\1\c\v\i\a\x\7\h\2\w\b\p\e\7\7\z\a\y\n\s\2\6\0\h\f\t\a\x\o\e\j\4\6\e\q\v ]] 00:07:13.397 22:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.397 22:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.397 [2024-07-15 22:33:31.120037] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:13.397 [2024-07-15 22:33:31.120151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63282 ] 00:07:13.655 [2024-07-15 22:33:31.255314] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.655 [2024-07-15 22:33:31.374178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.655 [2024-07-15 22:33:31.427974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.913  Copying: 512/512 [B] (average 125 kBps) 00:07:13.913 00:07:13.914 22:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r9a24btocogi606abwoz2lkqczu5aehq49b2il93d1enfpls9xinsm4wt7n4ead9e4r8q5olxziqk7psgtwjf40lo2xt2emlfw5qd7ypzwwmigrpg2kzlarue6pm3aium1b5swpn2coqbg9ed5sz4gqmjmefxhxgylkyys2r7gu3paac3q6zk4qcqn3fs1hazxhj7a56xx44ly9dlpcbixoax12acaplk2be5g24yp0ykrmi7btad8svs5p3m0nllvo4yf6a79supof7wi25czkmqn3tl4sngv1mrpz9ukn5dehbt2x828g5a1e5b8b5tfxx5iz9xoi5n0vps15cmlnq5hzigwer8k5oriigednplbjnal96qhu8oa586a62vdmo37nhgtoa36nk0rrkfnm6p90bcvf9fzpcfmo23qypdwsb6ha1jqunga6ljz4ncop83jgincn5uoasbatjmpjh12lq1cviax7h2wbpe77zayns260hftaxoej46eqv == \r\9\a\2\4\b\t\o\c\o\g\i\6\0\6\a\b\w\o\z\2\l\k\q\c\z\u\5\a\e\h\q\4\9\b\2\i\l\9\3\d\1\e\n\f\p\l\s\9\x\i\n\s\m\4\w\t\7\n\4\e\a\d\9\e\4\r\8\q\5\o\l\x\z\i\q\k\7\p\s\g\t\w\j\f\4\0\l\o\2\x\t\2\e\m\l\f\w\5\q\d\7\y\p\z\w\w\m\i\g\r\p\g\2\k\z\l\a\r\u\e\6\p\m\3\a\i\u\m\1\b\5\s\w\p\n\2\c\o\q\b\g\9\e\d\5\s\z\4\g\q\m\j\m\e\f\x\h\x\g\y\l\k\y\y\s\2\r\7\g\u\3\p\a\a\c\3\q\6\z\k\4\q\c\q\n\3\f\s\1\h\a\z\x\h\j\7\a\5\6\x\x\4\4\l\y\9\d\l\p\c\b\i\x\o\a\x\1\2\a\c\a\p\l\k\2\b\e\5\g\2\4\y\p\0\y\k\r\m\i\7\b\t\a\d\8\s\v\s\5\p\3\m\0\n\l\l\v\o\4\y\f\6\a\7\9\s\u\p\o\f\7\w\i\2\5\c\z\k\m\q\n\3\t\l\4\s\n\g\v\1\m\r\p\z\9\u\k\n\5\d\e\h\b\t\2\x\8\2\8\g\5\a\1\e\5\b\8\b\5\t\f\x\x\5\i\z\9\x\o\i\5\n\0\v\p\s\1\5\c\m\l\n\q\5\h\z\i\g\w\e\r\8\k\5\o\r\i\i\g\e\d\n\p\l\b\j\n\a\l\9\6\q\h\u\8\o\a\5\8\6\a\6\2\v\d\m\o\3\7\n\h\g\t\o\a\3\6\n\k\0\r\r\k\f\n\m\6\p\9\0\b\c\v\f\9\f\z\p\c\f\m\o\2\3\q\y\p\d\w\s\b\6\h\a\1\j\q\u\n\g\a\6\l\j\z\4\n\c\o\p\8\3\j\g\i\n\c\n\5\u\o\a\s\b\a\t\j\m\p\j\h\1\2\l\q\1\c\v\i\a\x\7\h\2\w\b\p\e\7\7\z\a\y\n\s\2\6\0\h\f\t\a\x\o\e\j\4\6\e\q\v ]] 00:07:13.914 22:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.914 22:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.172 [2024-07-15 22:33:31.764055] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
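The sync combination that just completed and the dsync one starting here exercise the two synchronous write flags; their meanings are assumed to follow the usual dd/open(2) convention. An illustrative run of the dsync case with the distinction noted inline:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # --oflag=dsync -> O_DSYNC: each write returns once the data (plus any
    #                  metadata needed to read it back) is on stable storage.
    # --oflag=sync  -> O_SYNC : as dsync, but all file metadata is flushed too.
    "$SPDK_DD" --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=dsync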
00:07:14.172 [2024-07-15 22:33:31.764156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63286 ] 00:07:14.172 [2024-07-15 22:33:31.899416] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.431 [2024-07-15 22:33:32.017634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.431 [2024-07-15 22:33:32.070851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.688  Copying: 512/512 [B] (average 500 kBps) 00:07:14.688 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r9a24btocogi606abwoz2lkqczu5aehq49b2il93d1enfpls9xinsm4wt7n4ead9e4r8q5olxziqk7psgtwjf40lo2xt2emlfw5qd7ypzwwmigrpg2kzlarue6pm3aium1b5swpn2coqbg9ed5sz4gqmjmefxhxgylkyys2r7gu3paac3q6zk4qcqn3fs1hazxhj7a56xx44ly9dlpcbixoax12acaplk2be5g24yp0ykrmi7btad8svs5p3m0nllvo4yf6a79supof7wi25czkmqn3tl4sngv1mrpz9ukn5dehbt2x828g5a1e5b8b5tfxx5iz9xoi5n0vps15cmlnq5hzigwer8k5oriigednplbjnal96qhu8oa586a62vdmo37nhgtoa36nk0rrkfnm6p90bcvf9fzpcfmo23qypdwsb6ha1jqunga6ljz4ncop83jgincn5uoasbatjmpjh12lq1cviax7h2wbpe77zayns260hftaxoej46eqv == \r\9\a\2\4\b\t\o\c\o\g\i\6\0\6\a\b\w\o\z\2\l\k\q\c\z\u\5\a\e\h\q\4\9\b\2\i\l\9\3\d\1\e\n\f\p\l\s\9\x\i\n\s\m\4\w\t\7\n\4\e\a\d\9\e\4\r\8\q\5\o\l\x\z\i\q\k\7\p\s\g\t\w\j\f\4\0\l\o\2\x\t\2\e\m\l\f\w\5\q\d\7\y\p\z\w\w\m\i\g\r\p\g\2\k\z\l\a\r\u\e\6\p\m\3\a\i\u\m\1\b\5\s\w\p\n\2\c\o\q\b\g\9\e\d\5\s\z\4\g\q\m\j\m\e\f\x\h\x\g\y\l\k\y\y\s\2\r\7\g\u\3\p\a\a\c\3\q\6\z\k\4\q\c\q\n\3\f\s\1\h\a\z\x\h\j\7\a\5\6\x\x\4\4\l\y\9\d\l\p\c\b\i\x\o\a\x\1\2\a\c\a\p\l\k\2\b\e\5\g\2\4\y\p\0\y\k\r\m\i\7\b\t\a\d\8\s\v\s\5\p\3\m\0\n\l\l\v\o\4\y\f\6\a\7\9\s\u\p\o\f\7\w\i\2\5\c\z\k\m\q\n\3\t\l\4\s\n\g\v\1\m\r\p\z\9\u\k\n\5\d\e\h\b\t\2\x\8\2\8\g\5\a\1\e\5\b\8\b\5\t\f\x\x\5\i\z\9\x\o\i\5\n\0\v\p\s\1\5\c\m\l\n\q\5\h\z\i\g\w\e\r\8\k\5\o\r\i\i\g\e\d\n\p\l\b\j\n\a\l\9\6\q\h\u\8\o\a\5\8\6\a\6\2\v\d\m\o\3\7\n\h\g\t\o\a\3\6\n\k\0\r\r\k\f\n\m\6\p\9\0\b\c\v\f\9\f\z\p\c\f\m\o\2\3\q\y\p\d\w\s\b\6\h\a\1\j\q\u\n\g\a\6\l\j\z\4\n\c\o\p\8\3\j\g\i\n\c\n\5\u\o\a\s\b\a\t\j\m\p\j\h\1\2\l\q\1\c\v\i\a\x\7\h\2\w\b\p\e\7\7\z\a\y\n\s\2\6\0\h\f\t\a\x\o\e\j\4\6\e\q\v ]] 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.688 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:14.688 [2024-07-15 22:33:32.389719] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:14.688 [2024-07-15 22:33:32.389836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:07:14.947 [2024-07-15 22:33:32.528043] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.947 [2024-07-15 22:33:32.644354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.947 [2024-07-15 22:33:32.697449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.205  Copying: 512/512 [B] (average 500 kBps) 00:07:15.205 00:07:15.206 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9h0rg3jt8w9lkx9p6vywtv9w3mqf4nodx8fkpoffiot52k3fwxy9ww735ah2xhjvts8sb45we2ng8grqmew7ozldbj9x1k2me23gx3cbplt9a9p2bvg6l6j4tkf2cm3q0r6y55eo9uwi8o1xv3azbnh5k4uj7guqvwaijunpg64g9vo2998ayt4v19ssy4f01wrlvfie5xw30z92s1vx4iuugd2kndsnk3sqop7h7wjzu3p9sx40jp38kr9xele6ynzqdqe1jbejgmyshh8wlsvd9zu5xmkplxfz76qjsyv7qzemps3aul6b9jcrlhnl5t2s2o3y9hgu1nfbhcp9rcfzimc7ovo8xwhrqss4va8r29y54gscmb84gqf7ic8onw3ju788i1sj3ktqboxluh6byk42mc9p7zs8e7vk6vxz7hkgil7ybza7capj01tmzieklb4dhgfwd9o6rj1n1turukq0f43sl6hz5q7m23mbemqol7dk74dqipbucl3y == \9\h\0\r\g\3\j\t\8\w\9\l\k\x\9\p\6\v\y\w\t\v\9\w\3\m\q\f\4\n\o\d\x\8\f\k\p\o\f\f\i\o\t\5\2\k\3\f\w\x\y\9\w\w\7\3\5\a\h\2\x\h\j\v\t\s\8\s\b\4\5\w\e\2\n\g\8\g\r\q\m\e\w\7\o\z\l\d\b\j\9\x\1\k\2\m\e\2\3\g\x\3\c\b\p\l\t\9\a\9\p\2\b\v\g\6\l\6\j\4\t\k\f\2\c\m\3\q\0\r\6\y\5\5\e\o\9\u\w\i\8\o\1\x\v\3\a\z\b\n\h\5\k\4\u\j\7\g\u\q\v\w\a\i\j\u\n\p\g\6\4\g\9\v\o\2\9\9\8\a\y\t\4\v\1\9\s\s\y\4\f\0\1\w\r\l\v\f\i\e\5\x\w\3\0\z\9\2\s\1\v\x\4\i\u\u\g\d\2\k\n\d\s\n\k\3\s\q\o\p\7\h\7\w\j\z\u\3\p\9\s\x\4\0\j\p\3\8\k\r\9\x\e\l\e\6\y\n\z\q\d\q\e\1\j\b\e\j\g\m\y\s\h\h\8\w\l\s\v\d\9\z\u\5\x\m\k\p\l\x\f\z\7\6\q\j\s\y\v\7\q\z\e\m\p\s\3\a\u\l\6\b\9\j\c\r\l\h\n\l\5\t\2\s\2\o\3\y\9\h\g\u\1\n\f\b\h\c\p\9\r\c\f\z\i\m\c\7\o\v\o\8\x\w\h\r\q\s\s\4\v\a\8\r\2\9\y\5\4\g\s\c\m\b\8\4\g\q\f\7\i\c\8\o\n\w\3\j\u\7\8\8\i\1\s\j\3\k\t\q\b\o\x\l\u\h\6\b\y\k\4\2\m\c\9\p\7\z\s\8\e\7\v\k\6\v\x\z\7\h\k\g\i\l\7\y\b\z\a\7\c\a\p\j\0\1\t\m\z\i\e\k\l\b\4\d\h\g\f\w\d\9\o\6\r\j\1\n\1\t\u\r\u\k\q\0\f\4\3\s\l\6\h\z\5\q\7\m\2\3\m\b\e\m\q\o\l\7\d\k\7\4\d\q\i\p\b\u\c\l\3\y ]] 00:07:15.206 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.206 22:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:15.206 [2024-07-15 22:33:32.989985] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
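From this point the read side switches to nonblock. For regular files POSIX gives O_NONBLOCK no special meaning (a regular file is never "not ready"), so these combinations mainly confirm the flag is accepted and the data still round-trips. Illustrative nonblock/nonblock copy under that assumption:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # Nonblock on both sides of a regular-file copy should behave exactly
    # like an unflagged 512 B copy.
    "$SPDK_DD" --if=dd.dump0 --iflag=nonblock --of=dd.dump1 --oflag=nonblock
    cmp dd.dump0 dd.dump1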
00:07:15.206 [2024-07-15 22:33:32.990071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63316 ] 00:07:15.464 [2024-07-15 22:33:33.123772] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.464 [2024-07-15 22:33:33.241866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.464 [2024-07-15 22:33:33.295463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.980  Copying: 512/512 [B] (average 500 kBps) 00:07:15.980 00:07:15.980 22:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9h0rg3jt8w9lkx9p6vywtv9w3mqf4nodx8fkpoffiot52k3fwxy9ww735ah2xhjvts8sb45we2ng8grqmew7ozldbj9x1k2me23gx3cbplt9a9p2bvg6l6j4tkf2cm3q0r6y55eo9uwi8o1xv3azbnh5k4uj7guqvwaijunpg64g9vo2998ayt4v19ssy4f01wrlvfie5xw30z92s1vx4iuugd2kndsnk3sqop7h7wjzu3p9sx40jp38kr9xele6ynzqdqe1jbejgmyshh8wlsvd9zu5xmkplxfz76qjsyv7qzemps3aul6b9jcrlhnl5t2s2o3y9hgu1nfbhcp9rcfzimc7ovo8xwhrqss4va8r29y54gscmb84gqf7ic8onw3ju788i1sj3ktqboxluh6byk42mc9p7zs8e7vk6vxz7hkgil7ybza7capj01tmzieklb4dhgfwd9o6rj1n1turukq0f43sl6hz5q7m23mbemqol7dk74dqipbucl3y == \9\h\0\r\g\3\j\t\8\w\9\l\k\x\9\p\6\v\y\w\t\v\9\w\3\m\q\f\4\n\o\d\x\8\f\k\p\o\f\f\i\o\t\5\2\k\3\f\w\x\y\9\w\w\7\3\5\a\h\2\x\h\j\v\t\s\8\s\b\4\5\w\e\2\n\g\8\g\r\q\m\e\w\7\o\z\l\d\b\j\9\x\1\k\2\m\e\2\3\g\x\3\c\b\p\l\t\9\a\9\p\2\b\v\g\6\l\6\j\4\t\k\f\2\c\m\3\q\0\r\6\y\5\5\e\o\9\u\w\i\8\o\1\x\v\3\a\z\b\n\h\5\k\4\u\j\7\g\u\q\v\w\a\i\j\u\n\p\g\6\4\g\9\v\o\2\9\9\8\a\y\t\4\v\1\9\s\s\y\4\f\0\1\w\r\l\v\f\i\e\5\x\w\3\0\z\9\2\s\1\v\x\4\i\u\u\g\d\2\k\n\d\s\n\k\3\s\q\o\p\7\h\7\w\j\z\u\3\p\9\s\x\4\0\j\p\3\8\k\r\9\x\e\l\e\6\y\n\z\q\d\q\e\1\j\b\e\j\g\m\y\s\h\h\8\w\l\s\v\d\9\z\u\5\x\m\k\p\l\x\f\z\7\6\q\j\s\y\v\7\q\z\e\m\p\s\3\a\u\l\6\b\9\j\c\r\l\h\n\l\5\t\2\s\2\o\3\y\9\h\g\u\1\n\f\b\h\c\p\9\r\c\f\z\i\m\c\7\o\v\o\8\x\w\h\r\q\s\s\4\v\a\8\r\2\9\y\5\4\g\s\c\m\b\8\4\g\q\f\7\i\c\8\o\n\w\3\j\u\7\8\8\i\1\s\j\3\k\t\q\b\o\x\l\u\h\6\b\y\k\4\2\m\c\9\p\7\z\s\8\e\7\v\k\6\v\x\z\7\h\k\g\i\l\7\y\b\z\a\7\c\a\p\j\0\1\t\m\z\i\e\k\l\b\4\d\h\g\f\w\d\9\o\6\r\j\1\n\1\t\u\r\u\k\q\0\f\4\3\s\l\6\h\z\5\q\7\m\2\3\m\b\e\m\q\o\l\7\d\k\7\4\d\q\i\p\b\u\c\l\3\y ]] 00:07:15.980 22:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.980 22:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:15.980 [2024-07-15 22:33:33.630980] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:15.980 [2024-07-15 22:33:33.631091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63320 ] 00:07:15.980 [2024-07-15 22:33:33.768082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.238 [2024-07-15 22:33:33.888129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.238 [2024-07-15 22:33:33.941514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.495  Copying: 512/512 [B] (average 250 kBps) 00:07:16.495 00:07:16.496 22:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9h0rg3jt8w9lkx9p6vywtv9w3mqf4nodx8fkpoffiot52k3fwxy9ww735ah2xhjvts8sb45we2ng8grqmew7ozldbj9x1k2me23gx3cbplt9a9p2bvg6l6j4tkf2cm3q0r6y55eo9uwi8o1xv3azbnh5k4uj7guqvwaijunpg64g9vo2998ayt4v19ssy4f01wrlvfie5xw30z92s1vx4iuugd2kndsnk3sqop7h7wjzu3p9sx40jp38kr9xele6ynzqdqe1jbejgmyshh8wlsvd9zu5xmkplxfz76qjsyv7qzemps3aul6b9jcrlhnl5t2s2o3y9hgu1nfbhcp9rcfzimc7ovo8xwhrqss4va8r29y54gscmb84gqf7ic8onw3ju788i1sj3ktqboxluh6byk42mc9p7zs8e7vk6vxz7hkgil7ybza7capj01tmzieklb4dhgfwd9o6rj1n1turukq0f43sl6hz5q7m23mbemqol7dk74dqipbucl3y == \9\h\0\r\g\3\j\t\8\w\9\l\k\x\9\p\6\v\y\w\t\v\9\w\3\m\q\f\4\n\o\d\x\8\f\k\p\o\f\f\i\o\t\5\2\k\3\f\w\x\y\9\w\w\7\3\5\a\h\2\x\h\j\v\t\s\8\s\b\4\5\w\e\2\n\g\8\g\r\q\m\e\w\7\o\z\l\d\b\j\9\x\1\k\2\m\e\2\3\g\x\3\c\b\p\l\t\9\a\9\p\2\b\v\g\6\l\6\j\4\t\k\f\2\c\m\3\q\0\r\6\y\5\5\e\o\9\u\w\i\8\o\1\x\v\3\a\z\b\n\h\5\k\4\u\j\7\g\u\q\v\w\a\i\j\u\n\p\g\6\4\g\9\v\o\2\9\9\8\a\y\t\4\v\1\9\s\s\y\4\f\0\1\w\r\l\v\f\i\e\5\x\w\3\0\z\9\2\s\1\v\x\4\i\u\u\g\d\2\k\n\d\s\n\k\3\s\q\o\p\7\h\7\w\j\z\u\3\p\9\s\x\4\0\j\p\3\8\k\r\9\x\e\l\e\6\y\n\z\q\d\q\e\1\j\b\e\j\g\m\y\s\h\h\8\w\l\s\v\d\9\z\u\5\x\m\k\p\l\x\f\z\7\6\q\j\s\y\v\7\q\z\e\m\p\s\3\a\u\l\6\b\9\j\c\r\l\h\n\l\5\t\2\s\2\o\3\y\9\h\g\u\1\n\f\b\h\c\p\9\r\c\f\z\i\m\c\7\o\v\o\8\x\w\h\r\q\s\s\4\v\a\8\r\2\9\y\5\4\g\s\c\m\b\8\4\g\q\f\7\i\c\8\o\n\w\3\j\u\7\8\8\i\1\s\j\3\k\t\q\b\o\x\l\u\h\6\b\y\k\4\2\m\c\9\p\7\z\s\8\e\7\v\k\6\v\x\z\7\h\k\g\i\l\7\y\b\z\a\7\c\a\p\j\0\1\t\m\z\i\e\k\l\b\4\d\h\g\f\w\d\9\o\6\r\j\1\n\1\t\u\r\u\k\q\0\f\4\3\s\l\6\h\z\5\q\7\m\2\3\m\b\e\m\q\o\l\7\d\k\7\4\d\q\i\p\b\u\c\l\3\y ]] 00:07:16.496 22:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.496 22:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:16.496 [2024-07-15 22:33:34.248935] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:16.496 [2024-07-15 22:33:34.249062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63335 ] 00:07:16.753 [2024-07-15 22:33:34.386073] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.753 [2024-07-15 22:33:34.504605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.753 [2024-07-15 22:33:34.557338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.012  Copying: 512/512 [B] (average 250 kBps) 00:07:17.012 00:07:17.012 22:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9h0rg3jt8w9lkx9p6vywtv9w3mqf4nodx8fkpoffiot52k3fwxy9ww735ah2xhjvts8sb45we2ng8grqmew7ozldbj9x1k2me23gx3cbplt9a9p2bvg6l6j4tkf2cm3q0r6y55eo9uwi8o1xv3azbnh5k4uj7guqvwaijunpg64g9vo2998ayt4v19ssy4f01wrlvfie5xw30z92s1vx4iuugd2kndsnk3sqop7h7wjzu3p9sx40jp38kr9xele6ynzqdqe1jbejgmyshh8wlsvd9zu5xmkplxfz76qjsyv7qzemps3aul6b9jcrlhnl5t2s2o3y9hgu1nfbhcp9rcfzimc7ovo8xwhrqss4va8r29y54gscmb84gqf7ic8onw3ju788i1sj3ktqboxluh6byk42mc9p7zs8e7vk6vxz7hkgil7ybza7capj01tmzieklb4dhgfwd9o6rj1n1turukq0f43sl6hz5q7m23mbemqol7dk74dqipbucl3y == \9\h\0\r\g\3\j\t\8\w\9\l\k\x\9\p\6\v\y\w\t\v\9\w\3\m\q\f\4\n\o\d\x\8\f\k\p\o\f\f\i\o\t\5\2\k\3\f\w\x\y\9\w\w\7\3\5\a\h\2\x\h\j\v\t\s\8\s\b\4\5\w\e\2\n\g\8\g\r\q\m\e\w\7\o\z\l\d\b\j\9\x\1\k\2\m\e\2\3\g\x\3\c\b\p\l\t\9\a\9\p\2\b\v\g\6\l\6\j\4\t\k\f\2\c\m\3\q\0\r\6\y\5\5\e\o\9\u\w\i\8\o\1\x\v\3\a\z\b\n\h\5\k\4\u\j\7\g\u\q\v\w\a\i\j\u\n\p\g\6\4\g\9\v\o\2\9\9\8\a\y\t\4\v\1\9\s\s\y\4\f\0\1\w\r\l\v\f\i\e\5\x\w\3\0\z\9\2\s\1\v\x\4\i\u\u\g\d\2\k\n\d\s\n\k\3\s\q\o\p\7\h\7\w\j\z\u\3\p\9\s\x\4\0\j\p\3\8\k\r\9\x\e\l\e\6\y\n\z\q\d\q\e\1\j\b\e\j\g\m\y\s\h\h\8\w\l\s\v\d\9\z\u\5\x\m\k\p\l\x\f\z\7\6\q\j\s\y\v\7\q\z\e\m\p\s\3\a\u\l\6\b\9\j\c\r\l\h\n\l\5\t\2\s\2\o\3\y\9\h\g\u\1\n\f\b\h\c\p\9\r\c\f\z\i\m\c\7\o\v\o\8\x\w\h\r\q\s\s\4\v\a\8\r\2\9\y\5\4\g\s\c\m\b\8\4\g\q\f\7\i\c\8\o\n\w\3\j\u\7\8\8\i\1\s\j\3\k\t\q\b\o\x\l\u\h\6\b\y\k\4\2\m\c\9\p\7\z\s\8\e\7\v\k\6\v\x\z\7\h\k\g\i\l\7\y\b\z\a\7\c\a\p\j\0\1\t\m\z\i\e\k\l\b\4\d\h\g\f\w\d\9\o\6\r\j\1\n\1\t\u\r\u\k\q\0\f\4\3\s\l\6\h\z\5\q\7\m\2\3\m\b\e\m\q\o\l\7\d\k\7\4\d\q\i\p\b\u\c\l\3\y ]] 00:07:17.012 00:07:17.012 real 0m5.002s 00:07:17.012 user 0m2.973s 00:07:17.012 sys 0m2.175s 00:07:17.012 22:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.012 ************************************ 00:07:17.012 22:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:17.012 END TEST dd_flags_misc 00:07:17.012 ************************************ 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:17.271 * Second test run, disabling liburing, forcing AIO 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.271 ************************************ 00:07:17.271 START TEST dd_flag_append_forced_aio 00:07:17.271 ************************************ 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=30qhrsifwlxhmty7w1x1wnkswq8633we 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=n4yyklhcgpkpaqvf1etadk0tkm2hi6gn 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 30qhrsifwlxhmty7w1x1wnkswq8633we 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s n4yyklhcgpkpaqvf1etadk0tkm2hi6gn 00:07:17.271 22:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:17.271 [2024-07-15 22:33:34.940541] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
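This second pass repeats the append case with --aio prepended to the spdk_dd invocation (the DD_APP+=("--aio") step above), which per the "* Second test run, disabling liburing, forcing AIO" banner switches the backend from io_uring to POSIX AIO; the flag semantics under test are unchanged. A sketch under that assumption, using the dump values from this run:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    printf %s 30qhrsifwlxhmty7w1x1wnkswq8633we > dd.dump0
    printf %s n4yyklhcgpkpaqvf1etadk0tkm2hi6gn > dd.dump1
    # Same append check as the first pass, but forcing the POSIX AIO backend.
    "$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ "$(cat dd.dump1)" == "n4yyklhcgpkpaqvf1etadk0tkm2hi6gn30qhrsifwlxhmty7w1x1wnkswq8633we" ]]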
00:07:17.271 [2024-07-15 22:33:34.940672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:07:17.271 [2024-07-15 22:33:35.079094] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.529 [2024-07-15 22:33:35.198643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.530 [2024-07-15 22:33:35.252545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.788  Copying: 32/32 [B] (average 31 kBps) 00:07:17.788 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ n4yyklhcgpkpaqvf1etadk0tkm2hi6gn30qhrsifwlxhmty7w1x1wnkswq8633we == \n\4\y\y\k\l\h\c\g\p\k\p\a\q\v\f\1\e\t\a\d\k\0\t\k\m\2\h\i\6\g\n\3\0\q\h\r\s\i\f\w\l\x\h\m\t\y\7\w\1\x\1\w\n\k\s\w\q\8\6\3\3\w\e ]] 00:07:17.788 00:07:17.788 real 0m0.649s 00:07:17.788 user 0m0.381s 00:07:17.788 sys 0m0.147s 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.788 ************************************ 00:07:17.788 END TEST dd_flag_append_forced_aio 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.788 ************************************ 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.788 ************************************ 00:07:17.788 START TEST dd_flag_directory_forced_aio 00:07:17.788 ************************************ 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.788 22:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.047 [2024-07-15 22:33:35.631563] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:18.047 [2024-07-15 22:33:35.631685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63390 ] 00:07:18.047 [2024-07-15 22:33:35.771268] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.306 [2024-07-15 22:33:35.892029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.306 [2024-07-15 22:33:35.946443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.306 [2024-07-15 22:33:35.982520] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.306 [2024-07-15 22:33:35.982596] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.306 [2024-07-15 22:33:35.982626] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.306 [2024-07-15 22:33:36.096269] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.565 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:18.565 [2024-07-15 22:33:36.285845] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:18.565 [2024-07-15 22:33:36.286010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ] 00:07:18.824 [2024-07-15 22:33:36.423349] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.824 [2024-07-15 22:33:36.541298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.824 [2024-07-15 22:33:36.594415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.824 [2024-07-15 22:33:36.629083] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.824 [2024-07-15 22:33:36.629156] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.824 [2024-07-15 22:33:36.629171] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.082 [2024-07-15 22:33:36.746129] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:19.082 
22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.082 00:07:19.082 real 0m1.291s 00:07:19.082 user 0m0.782s 00:07:19.082 sys 0m0.297s 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:19.082 ************************************ 00:07:19.082 END TEST dd_flag_directory_forced_aio 00:07:19.082 ************************************ 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.082 22:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:19.340 ************************************ 00:07:19.340 START TEST dd_flag_nofollow_forced_aio 00:07:19.340 ************************************ 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.340 22:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.340 [2024-07-15 22:33:36.976072] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:19.340 [2024-07-15 22:33:36.976163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63429 ] 00:07:19.340 [2024-07-15 22:33:37.108356] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.598 [2024-07-15 22:33:37.239580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.598 [2024-07-15 22:33:37.296428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.598 [2024-07-15 22:33:37.330827] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:19.598 [2024-07-15 22:33:37.330934] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:19.598 [2024-07-15 22:33:37.330967] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.855 [2024-07-15 22:33:37.441497] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.855 22:33:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:19.855 [2024-07-15 22:33:37.607124] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:19.855 [2024-07-15 22:33:37.607238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63443 ] 00:07:20.113 [2024-07-15 22:33:37.747010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.113 [2024-07-15 22:33:37.860485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.113 [2024-07-15 22:33:37.913888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.372 [2024-07-15 22:33:37.948015] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:20.372 [2024-07-15 22:33:37.948081] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:20.372 [2024-07-15 22:33:37.948098] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.372 [2024-07-15 22:33:38.059802] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:20.372 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.630 [2024-07-15 22:33:38.217649] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:20.630 [2024-07-15 22:33:38.217742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63451 ] 00:07:20.630 [2024-07-15 22:33:38.350031] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.888 [2024-07-15 22:33:38.467191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.888 [2024-07-15 22:33:38.520993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.145  Copying: 512/512 [B] (average 500 kBps) 00:07:21.145 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 4sybyirw440l7m1p379nerrl0lrlfdg4x3tl5hmfvtj3pkwlvp8o2fktkxx82kljjlve07lecb4k61msi8727daq71pf3kily6pm5t2ey9cg9gsacyxr0r9qnmb13jf6yg91ys67i0jz6ls4qwe8ginl3dgsajhg4gose2pb7323tn9g5wu7xvpwf8ws86xjoffsls06wyp9b4u3cagtkvv1zm7za3m3zzaqc6i6mm6vh8ip0caz351w665cs4gzqi7nwl47bz70q12u8r33hn3v2oh2y43ieud5iywgoip79ss2enqq6tz3ah44wnqjqnbt1qw67zhwdq8pp3okppx4zoju0ikxkjpynnf2rb4bkx1v0da7h3unintppdklkohi4ukfjpluvrjvnjh7f4r7uupgiyxlf9sihogyf6sx14bo4up29ymaljox6joyfrh8spzjbve0wlt27y65srziqsi2al5kgu865xskv294wq5imqeebacathro1y0t == \4\s\y\b\y\i\r\w\4\4\0\l\7\m\1\p\3\7\9\n\e\r\r\l\0\l\r\l\f\d\g\4\x\3\t\l\5\h\m\f\v\t\j\3\p\k\w\l\v\p\8\o\2\f\k\t\k\x\x\8\2\k\l\j\j\l\v\e\0\7\l\e\c\b\4\k\6\1\m\s\i\8\7\2\7\d\a\q\7\1\p\f\3\k\i\l\y\6\p\m\5\t\2\e\y\9\c\g\9\g\s\a\c\y\x\r\0\r\9\q\n\m\b\1\3\j\f\6\y\g\9\1\y\s\6\7\i\0\j\z\6\l\s\4\q\w\e\8\g\i\n\l\3\d\g\s\a\j\h\g\4\g\o\s\e\2\p\b\7\3\2\3\t\n\9\g\5\w\u\7\x\v\p\w\f\8\w\s\8\6\x\j\o\f\f\s\l\s\0\6\w\y\p\9\b\4\u\3\c\a\g\t\k\v\v\1\z\m\7\z\a\3\m\3\z\z\a\q\c\6\i\6\m\m\6\v\h\8\i\p\0\c\a\z\3\5\1\w\6\6\5\c\s\4\g\z\q\i\7\n\w\l\4\7\b\z\7\0\q\1\2\u\8\r\3\3\h\n\3\v\2\o\h\2\y\4\3\i\e\u\d\5\i\y\w\g\o\i\p\7\9\s\s\2\e\n\q\q\6\t\z\3\a\h\4\4\w\n\q\j\q\n\b\t\1\q\w\6\7\z\h\w\d\q\8\p\p\3\o\k\p\p\x\4\z\o\j\u\0\i\k\x\k\j\p\y\n\n\f\2\r\b\4\b\k\x\1\v\0\d\a\7\h\3\u\n\i\n\t\p\p\d\k\l\k\o\h\i\4\u\k\f\j\p\l\u\v\r\j\v\n\j\h\7\f\4\r\7\u\u\p\g\i\y\x\l\f\9\s\i\h\o\g\y\f\6\s\x\1\4\b\o\4\u\p\2\9\y\m\a\l\j\o\x\6\j\o\y\f\r\h\8\s\p\z\j\b\v\e\0\w\l\t\2\7\y\6\5\s\r\z\i\q\s\i\2\a\l\5\k\g\u\8\6\5\x\s\k\v\2\9\4\w\q\5\i\m\q\e\e\b\a\c\a\t\h\r\o\1\y\0\t ]] 00:07:21.145 00:07:21.145 real 0m1.887s 00:07:21.145 user 0m1.123s 00:07:21.145 sys 0m0.435s 00:07:21.145 ************************************ 00:07:21.145 END TEST dd_flag_nofollow_forced_aio 00:07:21.145 ************************************ 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.145 22:33:38 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:21.145 ************************************ 00:07:21.145 START TEST dd_flag_noatime_forced_aio 00:07:21.145 ************************************ 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721082818 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721082818 00:07:21.145 22:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:22.078 22:33:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.336 [2024-07-15 22:33:39.935065] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:22.336 [2024-07-15 22:33:39.935197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63491 ] 00:07:22.336 [2024-07-15 22:33:40.072308] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.594 [2024-07-15 22:33:40.189121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.594 [2024-07-15 22:33:40.245468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.852  Copying: 512/512 [B] (average 500 kBps) 00:07:22.852 00:07:22.852 22:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.852 22:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721082818 )) 00:07:22.852 22:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.852 22:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721082818 )) 00:07:22.852 22:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.852 [2024-07-15 22:33:40.623182] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:22.852 [2024-07-15 22:33:40.623286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63508 ] 00:07:23.110 [2024-07-15 22:33:40.761790] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.110 [2024-07-15 22:33:40.915277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.369 [2024-07-15 22:33:40.989044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.627  Copying: 512/512 [B] (average 500 kBps) 00:07:23.627 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721082821 )) 00:07:23.627 00:07:23.627 real 0m2.521s 00:07:23.627 user 0m0.921s 00:07:23.627 sys 0m0.354s 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.627 ************************************ 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 END TEST dd_flag_noatime_forced_aio 00:07:23.627 ************************************ 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.627 22:33:41 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 ************************************ 00:07:23.627 START TEST dd_flags_misc_forced_aio 00:07:23.627 ************************************ 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.627 22:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:23.885 [2024-07-15 22:33:41.484757] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:23.885 [2024-07-15 22:33:41.484858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63540 ] 00:07:23.885 [2024-07-15 22:33:41.618186] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.143 [2024-07-15 22:33:41.773588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.143 [2024-07-15 22:33:41.852186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.710  Copying: 512/512 [B] (average 500 kBps) 00:07:24.710 00:07:24.710 22:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vatgf8wchv0xtx5wlafxops9cshofxan4ffur9zodmo7tjw4u7hh3rl049z6dlea03jtv4q3l8kbk23kg3it23juiv3rg3pl5nrk7yy6g37ji5pbxqursfb2r2pr7jyczqjb5kaew02atnpszro881aoel6139w40osffdinm9egocdskxwrsa3cz3mnwo6gr0sywb00r32ku3ijbfj419oyl9jhqp6g8e23avjpevdxi8haw9na1ctmukahn812y1ovuainf6togmnvkioy3hxxihzrf6r9v4u3sd3huu93yqqbad8sd93jy24utywauxd48igne1mvw4n4rmc0c379y0rsfv8yy1uhvofmy1wbrasnfhzg6z6y4tvzdzoufq338tf9rbhywd2y6jjmqaa7j1tsz7lpclfiqgs8w9ii0kndhcissq1wi94dpbil7fb3cc41a4cjh9oimdzywqxu1taw8433ld8uwbr0n08b15stxq5sxay810lebipg == 
\v\a\t\g\f\8\w\c\h\v\0\x\t\x\5\w\l\a\f\x\o\p\s\9\c\s\h\o\f\x\a\n\4\f\f\u\r\9\z\o\d\m\o\7\t\j\w\4\u\7\h\h\3\r\l\0\4\9\z\6\d\l\e\a\0\3\j\t\v\4\q\3\l\8\k\b\k\2\3\k\g\3\i\t\2\3\j\u\i\v\3\r\g\3\p\l\5\n\r\k\7\y\y\6\g\3\7\j\i\5\p\b\x\q\u\r\s\f\b\2\r\2\p\r\7\j\y\c\z\q\j\b\5\k\a\e\w\0\2\a\t\n\p\s\z\r\o\8\8\1\a\o\e\l\6\1\3\9\w\4\0\o\s\f\f\d\i\n\m\9\e\g\o\c\d\s\k\x\w\r\s\a\3\c\z\3\m\n\w\o\6\g\r\0\s\y\w\b\0\0\r\3\2\k\u\3\i\j\b\f\j\4\1\9\o\y\l\9\j\h\q\p\6\g\8\e\2\3\a\v\j\p\e\v\d\x\i\8\h\a\w\9\n\a\1\c\t\m\u\k\a\h\n\8\1\2\y\1\o\v\u\a\i\n\f\6\t\o\g\m\n\v\k\i\o\y\3\h\x\x\i\h\z\r\f\6\r\9\v\4\u\3\s\d\3\h\u\u\9\3\y\q\q\b\a\d\8\s\d\9\3\j\y\2\4\u\t\y\w\a\u\x\d\4\8\i\g\n\e\1\m\v\w\4\n\4\r\m\c\0\c\3\7\9\y\0\r\s\f\v\8\y\y\1\u\h\v\o\f\m\y\1\w\b\r\a\s\n\f\h\z\g\6\z\6\y\4\t\v\z\d\z\o\u\f\q\3\3\8\t\f\9\r\b\h\y\w\d\2\y\6\j\j\m\q\a\a\7\j\1\t\s\z\7\l\p\c\l\f\i\q\g\s\8\w\9\i\i\0\k\n\d\h\c\i\s\s\q\1\w\i\9\4\d\p\b\i\l\7\f\b\3\c\c\4\1\a\4\c\j\h\9\o\i\m\d\z\y\w\q\x\u\1\t\a\w\8\4\3\3\l\d\8\u\w\b\r\0\n\0\8\b\1\5\s\t\x\q\5\s\x\a\y\8\1\0\l\e\b\i\p\g ]] 00:07:24.710 22:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.710 22:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:24.710 [2024-07-15 22:33:42.297374] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:24.710 [2024-07-15 22:33:42.297471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63548 ] 00:07:24.710 [2024-07-15 22:33:42.434313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.968 [2024-07-15 22:33:42.623210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.968 [2024-07-15 22:33:42.701290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.534  Copying: 512/512 [B] (average 500 kBps) 00:07:25.534 00:07:25.534 22:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vatgf8wchv0xtx5wlafxops9cshofxan4ffur9zodmo7tjw4u7hh3rl049z6dlea03jtv4q3l8kbk23kg3it23juiv3rg3pl5nrk7yy6g37ji5pbxqursfb2r2pr7jyczqjb5kaew02atnpszro881aoel6139w40osffdinm9egocdskxwrsa3cz3mnwo6gr0sywb00r32ku3ijbfj419oyl9jhqp6g8e23avjpevdxi8haw9na1ctmukahn812y1ovuainf6togmnvkioy3hxxihzrf6r9v4u3sd3huu93yqqbad8sd93jy24utywauxd48igne1mvw4n4rmc0c379y0rsfv8yy1uhvofmy1wbrasnfhzg6z6y4tvzdzoufq338tf9rbhywd2y6jjmqaa7j1tsz7lpclfiqgs8w9ii0kndhcissq1wi94dpbil7fb3cc41a4cjh9oimdzywqxu1taw8433ld8uwbr0n08b15stxq5sxay810lebipg == 
\v\a\t\g\f\8\w\c\h\v\0\x\t\x\5\w\l\a\f\x\o\p\s\9\c\s\h\o\f\x\a\n\4\f\f\u\r\9\z\o\d\m\o\7\t\j\w\4\u\7\h\h\3\r\l\0\4\9\z\6\d\l\e\a\0\3\j\t\v\4\q\3\l\8\k\b\k\2\3\k\g\3\i\t\2\3\j\u\i\v\3\r\g\3\p\l\5\n\r\k\7\y\y\6\g\3\7\j\i\5\p\b\x\q\u\r\s\f\b\2\r\2\p\r\7\j\y\c\z\q\j\b\5\k\a\e\w\0\2\a\t\n\p\s\z\r\o\8\8\1\a\o\e\l\6\1\3\9\w\4\0\o\s\f\f\d\i\n\m\9\e\g\o\c\d\s\k\x\w\r\s\a\3\c\z\3\m\n\w\o\6\g\r\0\s\y\w\b\0\0\r\3\2\k\u\3\i\j\b\f\j\4\1\9\o\y\l\9\j\h\q\p\6\g\8\e\2\3\a\v\j\p\e\v\d\x\i\8\h\a\w\9\n\a\1\c\t\m\u\k\a\h\n\8\1\2\y\1\o\v\u\a\i\n\f\6\t\o\g\m\n\v\k\i\o\y\3\h\x\x\i\h\z\r\f\6\r\9\v\4\u\3\s\d\3\h\u\u\9\3\y\q\q\b\a\d\8\s\d\9\3\j\y\2\4\u\t\y\w\a\u\x\d\4\8\i\g\n\e\1\m\v\w\4\n\4\r\m\c\0\c\3\7\9\y\0\r\s\f\v\8\y\y\1\u\h\v\o\f\m\y\1\w\b\r\a\s\n\f\h\z\g\6\z\6\y\4\t\v\z\d\z\o\u\f\q\3\3\8\t\f\9\r\b\h\y\w\d\2\y\6\j\j\m\q\a\a\7\j\1\t\s\z\7\l\p\c\l\f\i\q\g\s\8\w\9\i\i\0\k\n\d\h\c\i\s\s\q\1\w\i\9\4\d\p\b\i\l\7\f\b\3\c\c\4\1\a\4\c\j\h\9\o\i\m\d\z\y\w\q\x\u\1\t\a\w\8\4\3\3\l\d\8\u\w\b\r\0\n\0\8\b\1\5\s\t\x\q\5\s\x\a\y\8\1\0\l\e\b\i\p\g ]] 00:07:25.534 22:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.534 22:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:25.534 [2024-07-15 22:33:43.173213] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:25.534 [2024-07-15 22:33:43.173327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63561 ] 00:07:25.534 [2024-07-15 22:33:43.306401] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.791 [2024-07-15 22:33:43.457275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.791 [2024-07-15 22:33:43.533622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.358  Copying: 512/512 [B] (average 166 kBps) 00:07:26.358 00:07:26.358 22:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vatgf8wchv0xtx5wlafxops9cshofxan4ffur9zodmo7tjw4u7hh3rl049z6dlea03jtv4q3l8kbk23kg3it23juiv3rg3pl5nrk7yy6g37ji5pbxqursfb2r2pr7jyczqjb5kaew02atnpszro881aoel6139w40osffdinm9egocdskxwrsa3cz3mnwo6gr0sywb00r32ku3ijbfj419oyl9jhqp6g8e23avjpevdxi8haw9na1ctmukahn812y1ovuainf6togmnvkioy3hxxihzrf6r9v4u3sd3huu93yqqbad8sd93jy24utywauxd48igne1mvw4n4rmc0c379y0rsfv8yy1uhvofmy1wbrasnfhzg6z6y4tvzdzoufq338tf9rbhywd2y6jjmqaa7j1tsz7lpclfiqgs8w9ii0kndhcissq1wi94dpbil7fb3cc41a4cjh9oimdzywqxu1taw8433ld8uwbr0n08b15stxq5sxay810lebipg == 
\v\a\t\g\f\8\w\c\h\v\0\x\t\x\5\w\l\a\f\x\o\p\s\9\c\s\h\o\f\x\a\n\4\f\f\u\r\9\z\o\d\m\o\7\t\j\w\4\u\7\h\h\3\r\l\0\4\9\z\6\d\l\e\a\0\3\j\t\v\4\q\3\l\8\k\b\k\2\3\k\g\3\i\t\2\3\j\u\i\v\3\r\g\3\p\l\5\n\r\k\7\y\y\6\g\3\7\j\i\5\p\b\x\q\u\r\s\f\b\2\r\2\p\r\7\j\y\c\z\q\j\b\5\k\a\e\w\0\2\a\t\n\p\s\z\r\o\8\8\1\a\o\e\l\6\1\3\9\w\4\0\o\s\f\f\d\i\n\m\9\e\g\o\c\d\s\k\x\w\r\s\a\3\c\z\3\m\n\w\o\6\g\r\0\s\y\w\b\0\0\r\3\2\k\u\3\i\j\b\f\j\4\1\9\o\y\l\9\j\h\q\p\6\g\8\e\2\3\a\v\j\p\e\v\d\x\i\8\h\a\w\9\n\a\1\c\t\m\u\k\a\h\n\8\1\2\y\1\o\v\u\a\i\n\f\6\t\o\g\m\n\v\k\i\o\y\3\h\x\x\i\h\z\r\f\6\r\9\v\4\u\3\s\d\3\h\u\u\9\3\y\q\q\b\a\d\8\s\d\9\3\j\y\2\4\u\t\y\w\a\u\x\d\4\8\i\g\n\e\1\m\v\w\4\n\4\r\m\c\0\c\3\7\9\y\0\r\s\f\v\8\y\y\1\u\h\v\o\f\m\y\1\w\b\r\a\s\n\f\h\z\g\6\z\6\y\4\t\v\z\d\z\o\u\f\q\3\3\8\t\f\9\r\b\h\y\w\d\2\y\6\j\j\m\q\a\a\7\j\1\t\s\z\7\l\p\c\l\f\i\q\g\s\8\w\9\i\i\0\k\n\d\h\c\i\s\s\q\1\w\i\9\4\d\p\b\i\l\7\f\b\3\c\c\4\1\a\4\c\j\h\9\o\i\m\d\z\y\w\q\x\u\1\t\a\w\8\4\3\3\l\d\8\u\w\b\r\0\n\0\8\b\1\5\s\t\x\q\5\s\x\a\y\8\1\0\l\e\b\i\p\g ]] 00:07:26.358 22:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.358 22:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:26.358 [2024-07-15 22:33:44.015097] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:26.358 [2024-07-15 22:33:44.015195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63568 ] 00:07:26.358 [2024-07-15 22:33:44.153904] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.616 [2024-07-15 22:33:44.314370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.616 [2024-07-15 22:33:44.395501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.183  Copying: 512/512 [B] (average 250 kBps) 00:07:27.183 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vatgf8wchv0xtx5wlafxops9cshofxan4ffur9zodmo7tjw4u7hh3rl049z6dlea03jtv4q3l8kbk23kg3it23juiv3rg3pl5nrk7yy6g37ji5pbxqursfb2r2pr7jyczqjb5kaew02atnpszro881aoel6139w40osffdinm9egocdskxwrsa3cz3mnwo6gr0sywb00r32ku3ijbfj419oyl9jhqp6g8e23avjpevdxi8haw9na1ctmukahn812y1ovuainf6togmnvkioy3hxxihzrf6r9v4u3sd3huu93yqqbad8sd93jy24utywauxd48igne1mvw4n4rmc0c379y0rsfv8yy1uhvofmy1wbrasnfhzg6z6y4tvzdzoufq338tf9rbhywd2y6jjmqaa7j1tsz7lpclfiqgs8w9ii0kndhcissq1wi94dpbil7fb3cc41a4cjh9oimdzywqxu1taw8433ld8uwbr0n08b15stxq5sxay810lebipg == 
\v\a\t\g\f\8\w\c\h\v\0\x\t\x\5\w\l\a\f\x\o\p\s\9\c\s\h\o\f\x\a\n\4\f\f\u\r\9\z\o\d\m\o\7\t\j\w\4\u\7\h\h\3\r\l\0\4\9\z\6\d\l\e\a\0\3\j\t\v\4\q\3\l\8\k\b\k\2\3\k\g\3\i\t\2\3\j\u\i\v\3\r\g\3\p\l\5\n\r\k\7\y\y\6\g\3\7\j\i\5\p\b\x\q\u\r\s\f\b\2\r\2\p\r\7\j\y\c\z\q\j\b\5\k\a\e\w\0\2\a\t\n\p\s\z\r\o\8\8\1\a\o\e\l\6\1\3\9\w\4\0\o\s\f\f\d\i\n\m\9\e\g\o\c\d\s\k\x\w\r\s\a\3\c\z\3\m\n\w\o\6\g\r\0\s\y\w\b\0\0\r\3\2\k\u\3\i\j\b\f\j\4\1\9\o\y\l\9\j\h\q\p\6\g\8\e\2\3\a\v\j\p\e\v\d\x\i\8\h\a\w\9\n\a\1\c\t\m\u\k\a\h\n\8\1\2\y\1\o\v\u\a\i\n\f\6\t\o\g\m\n\v\k\i\o\y\3\h\x\x\i\h\z\r\f\6\r\9\v\4\u\3\s\d\3\h\u\u\9\3\y\q\q\b\a\d\8\s\d\9\3\j\y\2\4\u\t\y\w\a\u\x\d\4\8\i\g\n\e\1\m\v\w\4\n\4\r\m\c\0\c\3\7\9\y\0\r\s\f\v\8\y\y\1\u\h\v\o\f\m\y\1\w\b\r\a\s\n\f\h\z\g\6\z\6\y\4\t\v\z\d\z\o\u\f\q\3\3\8\t\f\9\r\b\h\y\w\d\2\y\6\j\j\m\q\a\a\7\j\1\t\s\z\7\l\p\c\l\f\i\q\g\s\8\w\9\i\i\0\k\n\d\h\c\i\s\s\q\1\w\i\9\4\d\p\b\i\l\7\f\b\3\c\c\4\1\a\4\c\j\h\9\o\i\m\d\z\y\w\q\x\u\1\t\a\w\8\4\3\3\l\d\8\u\w\b\r\0\n\0\8\b\1\5\s\t\x\q\5\s\x\a\y\8\1\0\l\e\b\i\p\g ]] 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.183 22:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:27.183 [2024-07-15 22:33:44.886126] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:27.183 [2024-07-15 22:33:44.886228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63581 ] 00:07:27.442 [2024-07-15 22:33:45.023851] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.442 [2024-07-15 22:33:45.178071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.442 [2024-07-15 22:33:45.255132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.959  Copying: 512/512 [B] (average 500 kBps) 00:07:27.959 00:07:27.959 22:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iz14n40mhcitprg4mivgthwbif1xvvcl276oit5al2d9trj48ez07jy0dmrrzh5bbfddxr8ufk5iwwty89vv99wvkuxbi3qu7hq7kyo036ayto8esct0fsto9yge2i8lfhglkhqhyvj2bvlwrl7y63x4yf3px4uzmijricplb0a55do4mtr35u55lvswm9b5kd16zmld00faw72qrr2bibl6p2p5kzhdfiu4kxw9ofpp23ajmb5xe0cyprrafxr9dsw927xo0pi1yc0iqawo0qvqisy75obqvgf853qhdjjv2hg0bbne95nyehc4cxexaeja5ztfwursunb4orsy2lkkfxrfc60rzgq1ohftguqkvzxcb8pk2lg2mdcz3vls7qzbw2ckweyj9hmk5ep84m3k6gkz2iout0qysh8mldfxudr4hv1gzu25uabyf1lfgatb43h7lqfypxqv9cdnodwbzf0j4qdw1kaw0ebracot3dkn5au79hq507nl8sgx == \i\z\1\4\n\4\0\m\h\c\i\t\p\r\g\4\m\i\v\g\t\h\w\b\i\f\1\x\v\v\c\l\2\7\6\o\i\t\5\a\l\2\d\9\t\r\j\4\8\e\z\0\7\j\y\0\d\m\r\r\z\h\5\b\b\f\d\d\x\r\8\u\f\k\5\i\w\w\t\y\8\9\v\v\9\9\w\v\k\u\x\b\i\3\q\u\7\h\q\7\k\y\o\0\3\6\a\y\t\o\8\e\s\c\t\0\f\s\t\o\9\y\g\e\2\i\8\l\f\h\g\l\k\h\q\h\y\v\j\2\b\v\l\w\r\l\7\y\6\3\x\4\y\f\3\p\x\4\u\z\m\i\j\r\i\c\p\l\b\0\a\5\5\d\o\4\m\t\r\3\5\u\5\5\l\v\s\w\m\9\b\5\k\d\1\6\z\m\l\d\0\0\f\a\w\7\2\q\r\r\2\b\i\b\l\6\p\2\p\5\k\z\h\d\f\i\u\4\k\x\w\9\o\f\p\p\2\3\a\j\m\b\5\x\e\0\c\y\p\r\r\a\f\x\r\9\d\s\w\9\2\7\x\o\0\p\i\1\y\c\0\i\q\a\w\o\0\q\v\q\i\s\y\7\5\o\b\q\v\g\f\8\5\3\q\h\d\j\j\v\2\h\g\0\b\b\n\e\9\5\n\y\e\h\c\4\c\x\e\x\a\e\j\a\5\z\t\f\w\u\r\s\u\n\b\4\o\r\s\y\2\l\k\k\f\x\r\f\c\6\0\r\z\g\q\1\o\h\f\t\g\u\q\k\v\z\x\c\b\8\p\k\2\l\g\2\m\d\c\z\3\v\l\s\7\q\z\b\w\2\c\k\w\e\y\j\9\h\m\k\5\e\p\8\4\m\3\k\6\g\k\z\2\i\o\u\t\0\q\y\s\h\8\m\l\d\f\x\u\d\r\4\h\v\1\g\z\u\2\5\u\a\b\y\f\1\l\f\g\a\t\b\4\3\h\7\l\q\f\y\p\x\q\v\9\c\d\n\o\d\w\b\z\f\0\j\4\q\d\w\1\k\a\w\0\e\b\r\a\c\o\t\3\d\k\n\5\a\u\7\9\h\q\5\0\7\n\l\8\s\g\x ]] 00:07:27.959 22:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.959 22:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:27.959 [2024-07-15 22:33:45.648413] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:27.959 [2024-07-15 22:33:45.648518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63589 ] 00:07:27.959 [2024-07-15 22:33:45.787230] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.217 [2024-07-15 22:33:45.901529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.217 [2024-07-15 22:33:45.959707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.476  Copying: 512/512 [B] (average 500 kBps) 00:07:28.476 00:07:28.476 22:33:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iz14n40mhcitprg4mivgthwbif1xvvcl276oit5al2d9trj48ez07jy0dmrrzh5bbfddxr8ufk5iwwty89vv99wvkuxbi3qu7hq7kyo036ayto8esct0fsto9yge2i8lfhglkhqhyvj2bvlwrl7y63x4yf3px4uzmijricplb0a55do4mtr35u55lvswm9b5kd16zmld00faw72qrr2bibl6p2p5kzhdfiu4kxw9ofpp23ajmb5xe0cyprrafxr9dsw927xo0pi1yc0iqawo0qvqisy75obqvgf853qhdjjv2hg0bbne95nyehc4cxexaeja5ztfwursunb4orsy2lkkfxrfc60rzgq1ohftguqkvzxcb8pk2lg2mdcz3vls7qzbw2ckweyj9hmk5ep84m3k6gkz2iout0qysh8mldfxudr4hv1gzu25uabyf1lfgatb43h7lqfypxqv9cdnodwbzf0j4qdw1kaw0ebracot3dkn5au79hq507nl8sgx == \i\z\1\4\n\4\0\m\h\c\i\t\p\r\g\4\m\i\v\g\t\h\w\b\i\f\1\x\v\v\c\l\2\7\6\o\i\t\5\a\l\2\d\9\t\r\j\4\8\e\z\0\7\j\y\0\d\m\r\r\z\h\5\b\b\f\d\d\x\r\8\u\f\k\5\i\w\w\t\y\8\9\v\v\9\9\w\v\k\u\x\b\i\3\q\u\7\h\q\7\k\y\o\0\3\6\a\y\t\o\8\e\s\c\t\0\f\s\t\o\9\y\g\e\2\i\8\l\f\h\g\l\k\h\q\h\y\v\j\2\b\v\l\w\r\l\7\y\6\3\x\4\y\f\3\p\x\4\u\z\m\i\j\r\i\c\p\l\b\0\a\5\5\d\o\4\m\t\r\3\5\u\5\5\l\v\s\w\m\9\b\5\k\d\1\6\z\m\l\d\0\0\f\a\w\7\2\q\r\r\2\b\i\b\l\6\p\2\p\5\k\z\h\d\f\i\u\4\k\x\w\9\o\f\p\p\2\3\a\j\m\b\5\x\e\0\c\y\p\r\r\a\f\x\r\9\d\s\w\9\2\7\x\o\0\p\i\1\y\c\0\i\q\a\w\o\0\q\v\q\i\s\y\7\5\o\b\q\v\g\f\8\5\3\q\h\d\j\j\v\2\h\g\0\b\b\n\e\9\5\n\y\e\h\c\4\c\x\e\x\a\e\j\a\5\z\t\f\w\u\r\s\u\n\b\4\o\r\s\y\2\l\k\k\f\x\r\f\c\6\0\r\z\g\q\1\o\h\f\t\g\u\q\k\v\z\x\c\b\8\p\k\2\l\g\2\m\d\c\z\3\v\l\s\7\q\z\b\w\2\c\k\w\e\y\j\9\h\m\k\5\e\p\8\4\m\3\k\6\g\k\z\2\i\o\u\t\0\q\y\s\h\8\m\l\d\f\x\u\d\r\4\h\v\1\g\z\u\2\5\u\a\b\y\f\1\l\f\g\a\t\b\4\3\h\7\l\q\f\y\p\x\q\v\9\c\d\n\o\d\w\b\z\f\0\j\4\q\d\w\1\k\a\w\0\e\b\r\a\c\o\t\3\d\k\n\5\a\u\7\9\h\q\5\0\7\n\l\8\s\g\x ]] 00:07:28.477 22:33:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.477 22:33:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:28.477 [2024-07-15 22:33:46.287016] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:28.477 [2024-07-15 22:33:46.287123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63602 ] 00:07:28.735 [2024-07-15 22:33:46.420010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.735 [2024-07-15 22:33:46.516128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.993 [2024-07-15 22:33:46.573390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.252  Copying: 512/512 [B] (average 500 kBps) 00:07:29.253 00:07:29.253 22:33:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iz14n40mhcitprg4mivgthwbif1xvvcl276oit5al2d9trj48ez07jy0dmrrzh5bbfddxr8ufk5iwwty89vv99wvkuxbi3qu7hq7kyo036ayto8esct0fsto9yge2i8lfhglkhqhyvj2bvlwrl7y63x4yf3px4uzmijricplb0a55do4mtr35u55lvswm9b5kd16zmld00faw72qrr2bibl6p2p5kzhdfiu4kxw9ofpp23ajmb5xe0cyprrafxr9dsw927xo0pi1yc0iqawo0qvqisy75obqvgf853qhdjjv2hg0bbne95nyehc4cxexaeja5ztfwursunb4orsy2lkkfxrfc60rzgq1ohftguqkvzxcb8pk2lg2mdcz3vls7qzbw2ckweyj9hmk5ep84m3k6gkz2iout0qysh8mldfxudr4hv1gzu25uabyf1lfgatb43h7lqfypxqv9cdnodwbzf0j4qdw1kaw0ebracot3dkn5au79hq507nl8sgx == \i\z\1\4\n\4\0\m\h\c\i\t\p\r\g\4\m\i\v\g\t\h\w\b\i\f\1\x\v\v\c\l\2\7\6\o\i\t\5\a\l\2\d\9\t\r\j\4\8\e\z\0\7\j\y\0\d\m\r\r\z\h\5\b\b\f\d\d\x\r\8\u\f\k\5\i\w\w\t\y\8\9\v\v\9\9\w\v\k\u\x\b\i\3\q\u\7\h\q\7\k\y\o\0\3\6\a\y\t\o\8\e\s\c\t\0\f\s\t\o\9\y\g\e\2\i\8\l\f\h\g\l\k\h\q\h\y\v\j\2\b\v\l\w\r\l\7\y\6\3\x\4\y\f\3\p\x\4\u\z\m\i\j\r\i\c\p\l\b\0\a\5\5\d\o\4\m\t\r\3\5\u\5\5\l\v\s\w\m\9\b\5\k\d\1\6\z\m\l\d\0\0\f\a\w\7\2\q\r\r\2\b\i\b\l\6\p\2\p\5\k\z\h\d\f\i\u\4\k\x\w\9\o\f\p\p\2\3\a\j\m\b\5\x\e\0\c\y\p\r\r\a\f\x\r\9\d\s\w\9\2\7\x\o\0\p\i\1\y\c\0\i\q\a\w\o\0\q\v\q\i\s\y\7\5\o\b\q\v\g\f\8\5\3\q\h\d\j\j\v\2\h\g\0\b\b\n\e\9\5\n\y\e\h\c\4\c\x\e\x\a\e\j\a\5\z\t\f\w\u\r\s\u\n\b\4\o\r\s\y\2\l\k\k\f\x\r\f\c\6\0\r\z\g\q\1\o\h\f\t\g\u\q\k\v\z\x\c\b\8\p\k\2\l\g\2\m\d\c\z\3\v\l\s\7\q\z\b\w\2\c\k\w\e\y\j\9\h\m\k\5\e\p\8\4\m\3\k\6\g\k\z\2\i\o\u\t\0\q\y\s\h\8\m\l\d\f\x\u\d\r\4\h\v\1\g\z\u\2\5\u\a\b\y\f\1\l\f\g\a\t\b\4\3\h\7\l\q\f\y\p\x\q\v\9\c\d\n\o\d\w\b\z\f\0\j\4\q\d\w\1\k\a\w\0\e\b\r\a\c\o\t\3\d\k\n\5\a\u\7\9\h\q\5\0\7\n\l\8\s\g\x ]] 00:07:29.253 22:33:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.253 22:33:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:29.253 [2024-07-15 22:33:46.917047] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:29.253 [2024-07-15 22:33:46.917133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63609 ] 00:07:29.253 [2024-07-15 22:33:47.054869] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.512 [2024-07-15 22:33:47.127174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.512 [2024-07-15 22:33:47.182474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.030  Copying: 512/512 [B] (average 3506 Bps) 00:07:30.030 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iz14n40mhcitprg4mivgthwbif1xvvcl276oit5al2d9trj48ez07jy0dmrrzh5bbfddxr8ufk5iwwty89vv99wvkuxbi3qu7hq7kyo036ayto8esct0fsto9yge2i8lfhglkhqhyvj2bvlwrl7y63x4yf3px4uzmijricplb0a55do4mtr35u55lvswm9b5kd16zmld00faw72qrr2bibl6p2p5kzhdfiu4kxw9ofpp23ajmb5xe0cyprrafxr9dsw927xo0pi1yc0iqawo0qvqisy75obqvgf853qhdjjv2hg0bbne95nyehc4cxexaeja5ztfwursunb4orsy2lkkfxrfc60rzgq1ohftguqkvzxcb8pk2lg2mdcz3vls7qzbw2ckweyj9hmk5ep84m3k6gkz2iout0qysh8mldfxudr4hv1gzu25uabyf1lfgatb43h7lqfypxqv9cdnodwbzf0j4qdw1kaw0ebracot3dkn5au79hq507nl8sgx == \i\z\1\4\n\4\0\m\h\c\i\t\p\r\g\4\m\i\v\g\t\h\w\b\i\f\1\x\v\v\c\l\2\7\6\o\i\t\5\a\l\2\d\9\t\r\j\4\8\e\z\0\7\j\y\0\d\m\r\r\z\h\5\b\b\f\d\d\x\r\8\u\f\k\5\i\w\w\t\y\8\9\v\v\9\9\w\v\k\u\x\b\i\3\q\u\7\h\q\7\k\y\o\0\3\6\a\y\t\o\8\e\s\c\t\0\f\s\t\o\9\y\g\e\2\i\8\l\f\h\g\l\k\h\q\h\y\v\j\2\b\v\l\w\r\l\7\y\6\3\x\4\y\f\3\p\x\4\u\z\m\i\j\r\i\c\p\l\b\0\a\5\5\d\o\4\m\t\r\3\5\u\5\5\l\v\s\w\m\9\b\5\k\d\1\6\z\m\l\d\0\0\f\a\w\7\2\q\r\r\2\b\i\b\l\6\p\2\p\5\k\z\h\d\f\i\u\4\k\x\w\9\o\f\p\p\2\3\a\j\m\b\5\x\e\0\c\y\p\r\r\a\f\x\r\9\d\s\w\9\2\7\x\o\0\p\i\1\y\c\0\i\q\a\w\o\0\q\v\q\i\s\y\7\5\o\b\q\v\g\f\8\5\3\q\h\d\j\j\v\2\h\g\0\b\b\n\e\9\5\n\y\e\h\c\4\c\x\e\x\a\e\j\a\5\z\t\f\w\u\r\s\u\n\b\4\o\r\s\y\2\l\k\k\f\x\r\f\c\6\0\r\z\g\q\1\o\h\f\t\g\u\q\k\v\z\x\c\b\8\p\k\2\l\g\2\m\d\c\z\3\v\l\s\7\q\z\b\w\2\c\k\w\e\y\j\9\h\m\k\5\e\p\8\4\m\3\k\6\g\k\z\2\i\o\u\t\0\q\y\s\h\8\m\l\d\f\x\u\d\r\4\h\v\1\g\z\u\2\5\u\a\b\y\f\1\l\f\g\a\t\b\4\3\h\7\l\q\f\y\p\x\q\v\9\c\d\n\o\d\w\b\z\f\0\j\4\q\d\w\1\k\a\w\0\e\b\r\a\c\o\t\3\d\k\n\5\a\u\7\9\h\q\5\0\7\n\l\8\s\g\x ]] 00:07:30.031 00:07:30.031 real 0m6.212s 00:07:30.031 user 0m3.612s 00:07:30.031 sys 0m1.461s 00:07:30.031 ************************************ 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.031 END TEST dd_flags_misc_forced_aio 00:07:30.031 ************************************ 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:30.031 ************************************ 00:07:30.031 END TEST spdk_dd_posix 00:07:30.031 ************************************ 00:07:30.031 00:07:30.031 real 0m24.257s 00:07:30.031 user 0m12.954s 
00:07:30.031 sys 0m6.990s 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.031 22:33:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:30.031 22:33:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:30.031 22:33:47 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:30.031 22:33:47 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.031 22:33:47 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.031 22:33:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:30.031 ************************************ 00:07:30.031 START TEST spdk_dd_malloc 00:07:30.031 ************************************ 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:30.031 * Looking for test storage... 00:07:30.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:30.031 ************************************ 00:07:30.031 START TEST dd_malloc_copy 00:07:30.031 ************************************ 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:30.031 22:33:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.289 [2024-07-15 22:33:47.882173] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:30.289 [2024-07-15 22:33:47.882277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63683 ] 00:07:30.289 { 00:07:30.289 "subsystems": [ 00:07:30.289 { 00:07:30.289 "subsystem": "bdev", 00:07:30.289 "config": [ 00:07:30.289 { 00:07:30.289 "params": { 00:07:30.289 "block_size": 512, 00:07:30.289 "num_blocks": 1048576, 00:07:30.289 "name": "malloc0" 00:07:30.289 }, 00:07:30.289 "method": "bdev_malloc_create" 00:07:30.289 }, 00:07:30.289 { 00:07:30.289 "params": { 00:07:30.289 "block_size": 512, 00:07:30.289 "num_blocks": 1048576, 00:07:30.289 "name": "malloc1" 00:07:30.289 }, 00:07:30.289 "method": "bdev_malloc_create" 00:07:30.289 }, 00:07:30.289 { 00:07:30.289 "method": "bdev_wait_for_examine" 00:07:30.289 } 00:07:30.289 ] 00:07:30.289 } 00:07:30.289 ] 00:07:30.289 } 00:07:30.289 [2024-07-15 22:33:48.016114] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.547 [2024-07-15 22:33:48.127194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.547 [2024-07-15 22:33:48.179045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.990  Copying: 208/512 [MB] (208 MBps) Copying: 414/512 [MB] (206 MBps) Copying: 512/512 [MB] (average 207 MBps) 00:07:33.990 00:07:33.990 22:33:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:33.990 22:33:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:33.990 22:33:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:33.990 22:33:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:33.990 [2024-07-15 22:33:51.619720] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:33.990 [2024-07-15 22:33:51.620189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63731 ] 00:07:33.990 { 00:07:33.990 "subsystems": [ 00:07:33.990 { 00:07:33.990 "subsystem": "bdev", 00:07:33.990 "config": [ 00:07:33.990 { 00:07:33.990 "params": { 00:07:33.990 "block_size": 512, 00:07:33.990 "num_blocks": 1048576, 00:07:33.990 "name": "malloc0" 00:07:33.990 }, 00:07:33.990 "method": "bdev_malloc_create" 00:07:33.990 }, 00:07:33.990 { 00:07:33.990 "params": { 00:07:33.990 "block_size": 512, 00:07:33.990 "num_blocks": 1048576, 00:07:33.990 "name": "malloc1" 00:07:33.990 }, 00:07:33.990 "method": "bdev_malloc_create" 00:07:33.990 }, 00:07:33.990 { 00:07:33.990 "method": "bdev_wait_for_examine" 00:07:33.990 } 00:07:33.990 ] 00:07:33.990 } 00:07:33.990 ] 00:07:33.990 } 00:07:33.990 [2024-07-15 22:33:51.759469] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.249 [2024-07-15 22:33:51.874715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.249 [2024-07-15 22:33:51.928042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.800  Copying: 204/512 [MB] (204 MBps) Copying: 412/512 [MB] (208 MBps) Copying: 512/512 [MB] (average 204 MBps) 00:07:37.801 00:07:37.801 ************************************ 00:07:37.801 END TEST dd_malloc_copy 00:07:37.801 ************************************ 00:07:37.801 00:07:37.801 real 0m7.524s 00:07:37.801 user 0m6.545s 00:07:37.801 sys 0m0.822s 00:07:37.801 22:33:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.801 22:33:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.801 22:33:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:37.801 00:07:37.801 real 0m7.660s 00:07:37.801 user 0m6.596s 00:07:37.801 sys 0m0.905s 00:07:37.801 ************************************ 00:07:37.801 END TEST spdk_dd_malloc 00:07:37.801 ************************************ 00:07:37.801 22:33:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.801 22:33:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:37.801 22:33:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:37.801 22:33:55 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:37.801 22:33:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:37.801 22:33:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.801 22:33:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:37.801 ************************************ 00:07:37.801 START TEST spdk_dd_bdev_to_bdev 00:07:37.801 ************************************ 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:37.801 * Looking for test storage... 
00:07:37.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:37.801 
22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:37.801 ************************************ 00:07:37.801 START TEST dd_inflate_file 00:07:37.801 ************************************ 00:07:37.801 22:33:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:37.801 [2024-07-15 22:33:55.609676] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:37.801 [2024-07-15 22:33:55.610065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63842 ] 00:07:38.058 [2024-07-15 22:33:55.747584] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.058 [2024-07-15 22:33:55.855520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.316 [2024-07-15 22:33:55.908421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.573  Copying: 64/64 [MB] (average 1560 MBps) 00:07:38.573 00:07:38.573 ************************************ 00:07:38.573 END TEST dd_inflate_file 00:07:38.573 ************************************ 00:07:38.573 00:07:38.573 real 0m0.623s 00:07:38.573 user 0m0.370s 00:07:38.573 sys 0m0.302s 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 ************************************ 00:07:38.573 START TEST dd_copy_to_out_bdev 00:07:38.573 ************************************ 00:07:38.573 22:33:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:38.573 [2024-07-15 22:33:56.288461] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
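Note on the test_file0_size=67108891 value above: dd_inflate_file appends 64 blocks of 1048576 zero bytes (67108864 bytes) to dd.dump0, which already holds the 26-character magic line 'This Is Our Magic, find it' plus its newline (27 bytes, presumably written there by the echo at bdev_to_bdev.sh@93, whose redirect is not visible in the xtrace), so the expected size is 67108864 + 27 = 67108891.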
00:07:38.573 [2024-07-15 22:33:56.288720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63870 ] 00:07:38.573 { 00:07:38.573 "subsystems": [ 00:07:38.573 { 00:07:38.573 "subsystem": "bdev", 00:07:38.573 "config": [ 00:07:38.573 { 00:07:38.573 "params": { 00:07:38.573 "trtype": "pcie", 00:07:38.573 "traddr": "0000:00:10.0", 00:07:38.573 "name": "Nvme0" 00:07:38.573 }, 00:07:38.573 "method": "bdev_nvme_attach_controller" 00:07:38.573 }, 00:07:38.573 { 00:07:38.573 "params": { 00:07:38.573 "trtype": "pcie", 00:07:38.573 "traddr": "0000:00:11.0", 00:07:38.573 "name": "Nvme1" 00:07:38.573 }, 00:07:38.573 "method": "bdev_nvme_attach_controller" 00:07:38.573 }, 00:07:38.573 { 00:07:38.573 "method": "bdev_wait_for_examine" 00:07:38.573 } 00:07:38.573 ] 00:07:38.573 } 00:07:38.573 ] 00:07:38.573 } 00:07:38.831 [2024-07-15 22:33:56.422996] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.831 [2024-07-15 22:33:56.494845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.831 [2024-07-15 22:33:56.550711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.462  Copying: 53/64 [MB] (53 MBps) Copying: 64/64 [MB] (average 53 MBps) 00:07:40.462 00:07:40.462 00:07:40.462 real 0m1.959s 00:07:40.462 user 0m1.722s 00:07:40.462 sys 0m1.550s 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.462 ************************************ 00:07:40.462 END TEST dd_copy_to_out_bdev 00:07:40.462 ************************************ 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:40.462 ************************************ 00:07:40.462 START TEST dd_offset_magic 00:07:40.462 ************************************ 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:40.462 22:33:58 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:40.462 22:33:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:40.781 [2024-07-15 22:33:58.310994] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:40.781 [2024-07-15 22:33:58.311101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63915 ] 00:07:40.781 { 00:07:40.781 "subsystems": [ 00:07:40.781 { 00:07:40.781 "subsystem": "bdev", 00:07:40.781 "config": [ 00:07:40.781 { 00:07:40.781 "params": { 00:07:40.781 "trtype": "pcie", 00:07:40.781 "traddr": "0000:00:10.0", 00:07:40.781 "name": "Nvme0" 00:07:40.781 }, 00:07:40.781 "method": "bdev_nvme_attach_controller" 00:07:40.781 }, 00:07:40.781 { 00:07:40.781 "params": { 00:07:40.781 "trtype": "pcie", 00:07:40.781 "traddr": "0000:00:11.0", 00:07:40.781 "name": "Nvme1" 00:07:40.781 }, 00:07:40.781 "method": "bdev_nvme_attach_controller" 00:07:40.781 }, 00:07:40.781 { 00:07:40.781 "method": "bdev_wait_for_examine" 00:07:40.781 } 00:07:40.781 ] 00:07:40.781 } 00:07:40.781 ] 00:07:40.781 } 00:07:40.781 [2024-07-15 22:33:58.448284] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.781 [2024-07-15 22:33:58.568882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.057 [2024-07-15 22:33:58.622624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.316  Copying: 65/65 [MB] (average 915 MBps) 00:07:41.316 00:07:41.574 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:41.574 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:41.574 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.574 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.574 [2024-07-15 22:33:59.218024] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:41.574 [2024-07-15 22:33:59.218170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63935 ] 00:07:41.574 { 00:07:41.574 "subsystems": [ 00:07:41.574 { 00:07:41.574 "subsystem": "bdev", 00:07:41.574 "config": [ 00:07:41.574 { 00:07:41.574 "params": { 00:07:41.574 "trtype": "pcie", 00:07:41.574 "traddr": "0000:00:10.0", 00:07:41.574 "name": "Nvme0" 00:07:41.574 }, 00:07:41.574 "method": "bdev_nvme_attach_controller" 00:07:41.574 }, 00:07:41.574 { 00:07:41.574 "params": { 00:07:41.574 "trtype": "pcie", 00:07:41.574 "traddr": "0000:00:11.0", 00:07:41.574 "name": "Nvme1" 00:07:41.574 }, 00:07:41.574 "method": "bdev_nvme_attach_controller" 00:07:41.574 }, 00:07:41.574 { 00:07:41.574 "method": "bdev_wait_for_examine" 00:07:41.574 } 00:07:41.574 ] 00:07:41.574 } 00:07:41.574 ] 00:07:41.574 } 00:07:41.574 [2024-07-15 22:33:59.362814] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.833 [2024-07-15 22:33:59.478449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.833 [2024-07-15 22:33:59.532134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.351  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:42.351 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:42.351 22:33:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:42.351 [2024-07-15 22:34:00.024747] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
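Note on the offset-16 pass that just completed: dd_copy_to_out_bdev earlier wrote dd.dump0 (which begins with the magic line) onto Nvme0n1, so the pair of spdk_dd calls above amounts to (binary path shortened; flags as in the log):

  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576   # place 65 MiB from Nvme0n1 at a 16 MiB offset on Nvme1n1
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1  --skip=16 --bs=1048576  # read 1 MiB back from that same offset

The data read back must therefore start with the 26-byte magic, which the read -rn26 / [[ ... ]] comparison above confirms; the second pass, already launched above, repeats the same write/read/compare with --seek=64 and --skip=64.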
00:07:42.351 [2024-07-15 22:34:00.024859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63957 ] 00:07:42.351 { 00:07:42.351 "subsystems": [ 00:07:42.351 { 00:07:42.351 "subsystem": "bdev", 00:07:42.351 "config": [ 00:07:42.351 { 00:07:42.351 "params": { 00:07:42.351 "trtype": "pcie", 00:07:42.351 "traddr": "0000:00:10.0", 00:07:42.351 "name": "Nvme0" 00:07:42.351 }, 00:07:42.351 "method": "bdev_nvme_attach_controller" 00:07:42.351 }, 00:07:42.351 { 00:07:42.351 "params": { 00:07:42.351 "trtype": "pcie", 00:07:42.351 "traddr": "0000:00:11.0", 00:07:42.351 "name": "Nvme1" 00:07:42.351 }, 00:07:42.351 "method": "bdev_nvme_attach_controller" 00:07:42.351 }, 00:07:42.351 { 00:07:42.351 "method": "bdev_wait_for_examine" 00:07:42.351 } 00:07:42.351 ] 00:07:42.351 } 00:07:42.351 ] 00:07:42.351 } 00:07:42.351 [2024-07-15 22:34:00.164030] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.610 [2024-07-15 22:34:00.284149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.610 [2024-07-15 22:34:00.339829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.127  Copying: 65/65 [MB] (average 1048 MBps) 00:07:43.127 00:07:43.127 22:34:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:43.127 22:34:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:43.127 22:34:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:43.127 22:34:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:43.127 [2024-07-15 22:34:00.931188] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:43.127 [2024-07-15 22:34:00.931283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63977 ] 00:07:43.127 { 00:07:43.127 "subsystems": [ 00:07:43.127 { 00:07:43.127 "subsystem": "bdev", 00:07:43.127 "config": [ 00:07:43.127 { 00:07:43.127 "params": { 00:07:43.127 "trtype": "pcie", 00:07:43.127 "traddr": "0000:00:10.0", 00:07:43.127 "name": "Nvme0" 00:07:43.127 }, 00:07:43.127 "method": "bdev_nvme_attach_controller" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "params": { 00:07:43.127 "trtype": "pcie", 00:07:43.127 "traddr": "0000:00:11.0", 00:07:43.127 "name": "Nvme1" 00:07:43.127 }, 00:07:43.127 "method": "bdev_nvme_attach_controller" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "method": "bdev_wait_for_examine" 00:07:43.127 } 00:07:43.127 ] 00:07:43.127 } 00:07:43.127 ] 00:07:43.127 } 00:07:43.387 [2024-07-15 22:34:01.073729] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.387 [2024-07-15 22:34:01.185165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.646 [2024-07-15 22:34:01.242464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.905  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:43.905 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:43.905 00:07:43.905 real 0m3.404s 00:07:43.905 user 0m2.517s 00:07:43.905 sys 0m0.976s 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.905 ************************************ 00:07:43.905 END TEST dd_offset_magic 00:07:43.905 ************************************ 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:43.905 22:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.163 [2024-07-15 22:34:01.750128] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
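Note on the cleanup above: clear_nvme zero-fills each namespace through spdk_dd with --if=/dev/zero; the requested size of 4194330 bytes is rounded up to whole 1048576-byte blocks, ceil(4194330 / 1048576) = 5, hence --count=5 and the "Copying: 5120/5120 [kB]" results. The same clear is repeated for Nvme1n1 below.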
00:07:44.163 [2024-07-15 22:34:01.750224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64014 ] 00:07:44.163 { 00:07:44.163 "subsystems": [ 00:07:44.163 { 00:07:44.163 "subsystem": "bdev", 00:07:44.163 "config": [ 00:07:44.163 { 00:07:44.163 "params": { 00:07:44.163 "trtype": "pcie", 00:07:44.163 "traddr": "0000:00:10.0", 00:07:44.163 "name": "Nvme0" 00:07:44.163 }, 00:07:44.163 "method": "bdev_nvme_attach_controller" 00:07:44.163 }, 00:07:44.163 { 00:07:44.163 "params": { 00:07:44.163 "trtype": "pcie", 00:07:44.163 "traddr": "0000:00:11.0", 00:07:44.163 "name": "Nvme1" 00:07:44.163 }, 00:07:44.163 "method": "bdev_nvme_attach_controller" 00:07:44.163 }, 00:07:44.163 { 00:07:44.163 "method": "bdev_wait_for_examine" 00:07:44.163 } 00:07:44.163 ] 00:07:44.163 } 00:07:44.163 ] 00:07:44.163 } 00:07:44.163 [2024-07-15 22:34:01.891537] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.422 [2024-07-15 22:34:02.006605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.422 [2024-07-15 22:34:02.064992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.681  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:44.681 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:44.681 22:34:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.940 [2024-07-15 22:34:02.557065] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:44.940 [2024-07-15 22:34:02.557164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64030 ] 00:07:44.940 { 00:07:44.940 "subsystems": [ 00:07:44.940 { 00:07:44.940 "subsystem": "bdev", 00:07:44.940 "config": [ 00:07:44.940 { 00:07:44.940 "params": { 00:07:44.940 "trtype": "pcie", 00:07:44.940 "traddr": "0000:00:10.0", 00:07:44.940 "name": "Nvme0" 00:07:44.940 }, 00:07:44.940 "method": "bdev_nvme_attach_controller" 00:07:44.940 }, 00:07:44.940 { 00:07:44.940 "params": { 00:07:44.940 "trtype": "pcie", 00:07:44.940 "traddr": "0000:00:11.0", 00:07:44.940 "name": "Nvme1" 00:07:44.940 }, 00:07:44.940 "method": "bdev_nvme_attach_controller" 00:07:44.940 }, 00:07:44.940 { 00:07:44.940 "method": "bdev_wait_for_examine" 00:07:44.940 } 00:07:44.940 ] 00:07:44.940 } 00:07:44.940 ] 00:07:44.940 } 00:07:44.940 [2024-07-15 22:34:02.696030] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.199 [2024-07-15 22:34:02.792958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.199 [2024-07-15 22:34:02.849281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.457  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:45.457 00:07:45.457 22:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:45.457 ************************************ 00:07:45.457 END TEST spdk_dd_bdev_to_bdev 00:07:45.457 ************************************ 00:07:45.457 00:07:45.457 real 0m7.833s 00:07:45.458 user 0m5.817s 00:07:45.458 sys 0m3.575s 00:07:45.458 22:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.458 22:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.716 22:34:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:45.716 22:34:03 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:45.716 22:34:03 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:45.716 22:34:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.716 22:34:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.716 22:34:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:45.716 ************************************ 00:07:45.716 START TEST spdk_dd_uring 00:07:45.716 ************************************ 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:45.716 * Looking for test storage... 
00:07:45.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:45.716 ************************************ 00:07:45.716 START TEST dd_uring_copy 00:07:45.716 ************************************ 00:07:45.716 
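Note on the dd_uring_copy test starting here: it hot-adds a zram device (the cat of /sys/class/zram-control/hot_add below returns id 1, so the backing device is /dev/zram1), sizes it to 512M, then copies a file with a 1024-byte magic prefix through a uring bdev built on that device and a 512 MB malloc bdev, verifying the round trip with diff. A rough sketch of the device setup, assuming the usual zram sysfs attribute (the xtrace below shows only the echo, not its redirect target):

  cat /sys/class/zram-control/hot_add      # prints the new device id, 1 in this run
  echo 512M > /sys/block/zram1/disksize    # assumed target of the 'echo 512M' in set_zram_dev
  # uring bdev that the generated JSON configs create on top of it:
  #   { "method": "bdev_uring_create", "params": { "filename": "/dev/zram1", "name": "uring0" } }

The odd --bs=536869887 used when padding magic.dump0 a few entries below makes the file exactly 512 MiB: 1024 magic bytes, a newline, and 536869887 appended zero bytes total 536870912.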
22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=euqx9395zfj8yyo8gl2c16ozrq1okg5m8mwzyxynbuqkjg107ewnn80vu5zjcjbr0c7vh02k3hcuekg6241ydlhjncf5st5frttdgo34dkxcqu0884vguwnk8jykvq7ftc9v8wnwlme13cv1bgscmzqey7w5jq703hx3oq997w15sajlglbzqhvwvcwaj3cwf02siz83l841y28jhj0mbkxyfj34h115e9l8xjkkn3hgttk6zlct57a6zl88aup9uky16ybr7y22xrx66lyqg1q5d6d9w2p9adyxeixiq4xzna06wapkykm7jei4hc9qtrndyeexxct2gvt3fzrf6hqdgddx68c9g0h27nob4n58479yjdwfhaez37a60hclo0covbsaolnykzuu1nspfpnwtr7xg227z6c0rzgpk5w7j0e0h7auauavxycz6faepe44gsg2st6biek6tkwaw7s84r8yj1xi0lmlj34el23455hhajz0b88tf9oenvu5lfrhoaeq4luerdlp50m36pp16v29ok4x49zkagnvjte711m3vdx7v4bcxzar11w62ttyknbjuw1sp0bxxd8akzmkhvzrridalt4vmswtfyt4muy2c0tovyc4nnrfoz2ah4ovrjsr77gm72j7qw8o1n73t0777m7jvbrhi484ka6n7oktua9vlb5qmmdzdy4ieoe85xhxr1m3d81kznjehlaogp0ul3f81bl2v8hutx60flj5z5wb6w0i06eais7vde3yrevis10azipfmqd8yyklu7jsdcvsk5q6xyoaiifjvxu15p4thxttmmo4l3c3danb9181heb8pr2boaqlzhvsqat9a7yklxusuvhtf9kkqzpbo2o9lji251ku8apgeov52jv43lr64kk0k5irvvnh1724v53oowstvukbql3hjetswohwhtit6q8nvb0boj7wpvfjw1eoia6wbnctu0pu0hcfdrfew4mulq3vtpbb29l41te8an7znlozqh0x 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo euqx9395zfj8yyo8gl2c16ozrq1okg5m8mwzyxynbuqkjg107ewnn80vu5zjcjbr0c7vh02k3hcuekg6241ydlhjncf5st5frttdgo34dkxcqu0884vguwnk8jykvq7ftc9v8wnwlme13cv1bgscmzqey7w5jq703hx3oq997w15sajlglbzqhvwvcwaj3cwf02siz83l841y28jhj0mbkxyfj34h115e9l8xjkkn3hgttk6zlct57a6zl88aup9uky16ybr7y22xrx66lyqg1q5d6d9w2p9adyxeixiq4xzna06wapkykm7jei4hc9qtrndyeexxct2gvt3fzrf6hqdgddx68c9g0h27nob4n58479yjdwfhaez37a60hclo0covbsaolnykzuu1nspfpnwtr7xg227z6c0rzgpk5w7j0e0h7auauavxycz6faepe44gsg2st6biek6tkwaw7s84r8yj1xi0lmlj34el23455hhajz0b88tf9oenvu5lfrhoaeq4luerdlp50m36pp16v29ok4x49zkagnvjte711m3vdx7v4bcxzar11w62ttyknbjuw1sp0bxxd8akzmkhvzrridalt4vmswtfyt4muy2c0tovyc4nnrfoz2ah4ovrjsr77gm72j7qw8o1n73t0777m7jvbrhi484ka6n7oktua9vlb5qmmdzdy4ieoe85xhxr1m3d81kznjehlaogp0ul3f81bl2v8hutx60flj5z5wb6w0i06eais7vde3yrevis10azipfmqd8yyklu7jsdcvsk5q6xyoaiifjvxu15p4thxttmmo4l3c3danb9181heb8pr2boaqlzhvsqat9a7yklxusuvhtf9kkqzpbo2o9lji251ku8apgeov52jv43lr64kk0k5irvvnh1724v53oowstvukbql3hjetswohwhtit6q8nvb0boj7wpvfjw1eoia6wbnctu0pu0hcfdrfew4mulq3vtpbb29l41te8an7znlozqh0x 00:07:45.716 22:34:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:45.716 [2024-07-15 22:34:03.540218] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:45.716 [2024-07-15 22:34:03.540518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64094 ] 00:07:46.005 [2024-07-15 22:34:03.685697] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.005 [2024-07-15 22:34:03.798766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.306 [2024-07-15 22:34:03.856193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.443  Copying: 511/511 [MB] (average 939 MBps) 00:07:47.443 00:07:47.443 22:34:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:47.443 22:34:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:47.443 22:34:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:47.443 22:34:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.443 [2024-07-15 22:34:05.096924] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:47.443 [2024-07-15 22:34:05.097023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64121 ] 00:07:47.443 { 00:07:47.443 "subsystems": [ 00:07:47.443 { 00:07:47.443 "subsystem": "bdev", 00:07:47.443 "config": [ 00:07:47.443 { 00:07:47.443 "params": { 00:07:47.443 "block_size": 512, 00:07:47.443 "num_blocks": 1048576, 00:07:47.443 "name": "malloc0" 00:07:47.443 }, 00:07:47.443 "method": "bdev_malloc_create" 00:07:47.443 }, 00:07:47.443 { 00:07:47.443 "params": { 00:07:47.443 "filename": "/dev/zram1", 00:07:47.443 "name": "uring0" 00:07:47.443 }, 00:07:47.443 "method": "bdev_uring_create" 00:07:47.443 }, 00:07:47.443 { 00:07:47.443 "method": "bdev_wait_for_examine" 00:07:47.443 } 00:07:47.443 ] 00:07:47.443 } 00:07:47.443 ] 00:07:47.443 } 00:07:47.443 [2024-07-15 22:34:05.234977] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.702 [2024-07-15 22:34:05.330431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.702 [2024-07-15 22:34:05.384070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.531  Copying: 210/512 [MB] (210 MBps) Copying: 451/512 [MB] (241 MBps) Copying: 512/512 [MB] (average 226 MBps) 00:07:50.531 00:07:50.531 22:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:50.531 22:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:50.531 22:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:50.531 22:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:50.531 { 00:07:50.531 "subsystems": [ 00:07:50.531 { 00:07:50.531 "subsystem": "bdev", 00:07:50.531 "config": [ 00:07:50.531 { 00:07:50.531 "params": { 00:07:50.531 "block_size": 512, 00:07:50.531 "num_blocks": 1048576, 00:07:50.531 "name": "malloc0" 00:07:50.531 }, 
00:07:50.531 "method": "bdev_malloc_create" 00:07:50.531 }, 00:07:50.531 { 00:07:50.531 "params": { 00:07:50.531 "filename": "/dev/zram1", 00:07:50.531 "name": "uring0" 00:07:50.531 }, 00:07:50.531 "method": "bdev_uring_create" 00:07:50.531 }, 00:07:50.531 { 00:07:50.531 "method": "bdev_wait_for_examine" 00:07:50.531 } 00:07:50.531 ] 00:07:50.531 } 00:07:50.531 ] 00:07:50.531 } 00:07:50.531 [2024-07-15 22:34:08.315328] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:50.531 [2024-07-15 22:34:08.315757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64165 ] 00:07:50.789 [2024-07-15 22:34:08.450805] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.789 [2024-07-15 22:34:08.520061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.789 [2024-07-15 22:34:08.576318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.581  Copying: 156/512 [MB] (156 MBps) Copying: 317/512 [MB] (160 MBps) Copying: 492/512 [MB] (175 MBps) Copying: 512/512 [MB] (average 163 MBps) 00:07:54.581 00:07:54.581 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:54.581 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ euqx9395zfj8yyo8gl2c16ozrq1okg5m8mwzyxynbuqkjg107ewnn80vu5zjcjbr0c7vh02k3hcuekg6241ydlhjncf5st5frttdgo34dkxcqu0884vguwnk8jykvq7ftc9v8wnwlme13cv1bgscmzqey7w5jq703hx3oq997w15sajlglbzqhvwvcwaj3cwf02siz83l841y28jhj0mbkxyfj34h115e9l8xjkkn3hgttk6zlct57a6zl88aup9uky16ybr7y22xrx66lyqg1q5d6d9w2p9adyxeixiq4xzna06wapkykm7jei4hc9qtrndyeexxct2gvt3fzrf6hqdgddx68c9g0h27nob4n58479yjdwfhaez37a60hclo0covbsaolnykzuu1nspfpnwtr7xg227z6c0rzgpk5w7j0e0h7auauavxycz6faepe44gsg2st6biek6tkwaw7s84r8yj1xi0lmlj34el23455hhajz0b88tf9oenvu5lfrhoaeq4luerdlp50m36pp16v29ok4x49zkagnvjte711m3vdx7v4bcxzar11w62ttyknbjuw1sp0bxxd8akzmkhvzrridalt4vmswtfyt4muy2c0tovyc4nnrfoz2ah4ovrjsr77gm72j7qw8o1n73t0777m7jvbrhi484ka6n7oktua9vlb5qmmdzdy4ieoe85xhxr1m3d81kznjehlaogp0ul3f81bl2v8hutx60flj5z5wb6w0i06eais7vde3yrevis10azipfmqd8yyklu7jsdcvsk5q6xyoaiifjvxu15p4thxttmmo4l3c3danb9181heb8pr2boaqlzhvsqat9a7yklxusuvhtf9kkqzpbo2o9lji251ku8apgeov52jv43lr64kk0k5irvvnh1724v53oowstvukbql3hjetswohwhtit6q8nvb0boj7wpvfjw1eoia6wbnctu0pu0hcfdrfew4mulq3vtpbb29l41te8an7znlozqh0x == 
\e\u\q\x\9\3\9\5\z\f\j\8\y\y\o\8\g\l\2\c\1\6\o\z\r\q\1\o\k\g\5\m\8\m\w\z\y\x\y\n\b\u\q\k\j\g\1\0\7\e\w\n\n\8\0\v\u\5\z\j\c\j\b\r\0\c\7\v\h\0\2\k\3\h\c\u\e\k\g\6\2\4\1\y\d\l\h\j\n\c\f\5\s\t\5\f\r\t\t\d\g\o\3\4\d\k\x\c\q\u\0\8\8\4\v\g\u\w\n\k\8\j\y\k\v\q\7\f\t\c\9\v\8\w\n\w\l\m\e\1\3\c\v\1\b\g\s\c\m\z\q\e\y\7\w\5\j\q\7\0\3\h\x\3\o\q\9\9\7\w\1\5\s\a\j\l\g\l\b\z\q\h\v\w\v\c\w\a\j\3\c\w\f\0\2\s\i\z\8\3\l\8\4\1\y\2\8\j\h\j\0\m\b\k\x\y\f\j\3\4\h\1\1\5\e\9\l\8\x\j\k\k\n\3\h\g\t\t\k\6\z\l\c\t\5\7\a\6\z\l\8\8\a\u\p\9\u\k\y\1\6\y\b\r\7\y\2\2\x\r\x\6\6\l\y\q\g\1\q\5\d\6\d\9\w\2\p\9\a\d\y\x\e\i\x\i\q\4\x\z\n\a\0\6\w\a\p\k\y\k\m\7\j\e\i\4\h\c\9\q\t\r\n\d\y\e\e\x\x\c\t\2\g\v\t\3\f\z\r\f\6\h\q\d\g\d\d\x\6\8\c\9\g\0\h\2\7\n\o\b\4\n\5\8\4\7\9\y\j\d\w\f\h\a\e\z\3\7\a\6\0\h\c\l\o\0\c\o\v\b\s\a\o\l\n\y\k\z\u\u\1\n\s\p\f\p\n\w\t\r\7\x\g\2\2\7\z\6\c\0\r\z\g\p\k\5\w\7\j\0\e\0\h\7\a\u\a\u\a\v\x\y\c\z\6\f\a\e\p\e\4\4\g\s\g\2\s\t\6\b\i\e\k\6\t\k\w\a\w\7\s\8\4\r\8\y\j\1\x\i\0\l\m\l\j\3\4\e\l\2\3\4\5\5\h\h\a\j\z\0\b\8\8\t\f\9\o\e\n\v\u\5\l\f\r\h\o\a\e\q\4\l\u\e\r\d\l\p\5\0\m\3\6\p\p\1\6\v\2\9\o\k\4\x\4\9\z\k\a\g\n\v\j\t\e\7\1\1\m\3\v\d\x\7\v\4\b\c\x\z\a\r\1\1\w\6\2\t\t\y\k\n\b\j\u\w\1\s\p\0\b\x\x\d\8\a\k\z\m\k\h\v\z\r\r\i\d\a\l\t\4\v\m\s\w\t\f\y\t\4\m\u\y\2\c\0\t\o\v\y\c\4\n\n\r\f\o\z\2\a\h\4\o\v\r\j\s\r\7\7\g\m\7\2\j\7\q\w\8\o\1\n\7\3\t\0\7\7\7\m\7\j\v\b\r\h\i\4\8\4\k\a\6\n\7\o\k\t\u\a\9\v\l\b\5\q\m\m\d\z\d\y\4\i\e\o\e\8\5\x\h\x\r\1\m\3\d\8\1\k\z\n\j\e\h\l\a\o\g\p\0\u\l\3\f\8\1\b\l\2\v\8\h\u\t\x\6\0\f\l\j\5\z\5\w\b\6\w\0\i\0\6\e\a\i\s\7\v\d\e\3\y\r\e\v\i\s\1\0\a\z\i\p\f\m\q\d\8\y\y\k\l\u\7\j\s\d\c\v\s\k\5\q\6\x\y\o\a\i\i\f\j\v\x\u\1\5\p\4\t\h\x\t\t\m\m\o\4\l\3\c\3\d\a\n\b\9\1\8\1\h\e\b\8\p\r\2\b\o\a\q\l\z\h\v\s\q\a\t\9\a\7\y\k\l\x\u\s\u\v\h\t\f\9\k\k\q\z\p\b\o\2\o\9\l\j\i\2\5\1\k\u\8\a\p\g\e\o\v\5\2\j\v\4\3\l\r\6\4\k\k\0\k\5\i\r\v\v\n\h\1\7\2\4\v\5\3\o\o\w\s\t\v\u\k\b\q\l\3\h\j\e\t\s\w\o\h\w\h\t\i\t\6\q\8\n\v\b\0\b\o\j\7\w\p\v\f\j\w\1\e\o\i\a\6\w\b\n\c\t\u\0\p\u\0\h\c\f\d\r\f\e\w\4\m\u\l\q\3\v\t\p\b\b\2\9\l\4\1\t\e\8\a\n\7\z\n\l\o\z\q\h\0\x ]] 00:07:54.581 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:54.582 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ euqx9395zfj8yyo8gl2c16ozrq1okg5m8mwzyxynbuqkjg107ewnn80vu5zjcjbr0c7vh02k3hcuekg6241ydlhjncf5st5frttdgo34dkxcqu0884vguwnk8jykvq7ftc9v8wnwlme13cv1bgscmzqey7w5jq703hx3oq997w15sajlglbzqhvwvcwaj3cwf02siz83l841y28jhj0mbkxyfj34h115e9l8xjkkn3hgttk6zlct57a6zl88aup9uky16ybr7y22xrx66lyqg1q5d6d9w2p9adyxeixiq4xzna06wapkykm7jei4hc9qtrndyeexxct2gvt3fzrf6hqdgddx68c9g0h27nob4n58479yjdwfhaez37a60hclo0covbsaolnykzuu1nspfpnwtr7xg227z6c0rzgpk5w7j0e0h7auauavxycz6faepe44gsg2st6biek6tkwaw7s84r8yj1xi0lmlj34el23455hhajz0b88tf9oenvu5lfrhoaeq4luerdlp50m36pp16v29ok4x49zkagnvjte711m3vdx7v4bcxzar11w62ttyknbjuw1sp0bxxd8akzmkhvzrridalt4vmswtfyt4muy2c0tovyc4nnrfoz2ah4ovrjsr77gm72j7qw8o1n73t0777m7jvbrhi484ka6n7oktua9vlb5qmmdzdy4ieoe85xhxr1m3d81kznjehlaogp0ul3f81bl2v8hutx60flj5z5wb6w0i06eais7vde3yrevis10azipfmqd8yyklu7jsdcvsk5q6xyoaiifjvxu15p4thxttmmo4l3c3danb9181heb8pr2boaqlzhvsqat9a7yklxusuvhtf9kkqzpbo2o9lji251ku8apgeov52jv43lr64kk0k5irvvnh1724v53oowstvukbql3hjetswohwhtit6q8nvb0boj7wpvfjw1eoia6wbnctu0pu0hcfdrfew4mulq3vtpbb29l41te8an7znlozqh0x == 
\e\u\q\x\9\3\9\5\z\f\j\8\y\y\o\8\g\l\2\c\1\6\o\z\r\q\1\o\k\g\5\m\8\m\w\z\y\x\y\n\b\u\q\k\j\g\1\0\7\e\w\n\n\8\0\v\u\5\z\j\c\j\b\r\0\c\7\v\h\0\2\k\3\h\c\u\e\k\g\6\2\4\1\y\d\l\h\j\n\c\f\5\s\t\5\f\r\t\t\d\g\o\3\4\d\k\x\c\q\u\0\8\8\4\v\g\u\w\n\k\8\j\y\k\v\q\7\f\t\c\9\v\8\w\n\w\l\m\e\1\3\c\v\1\b\g\s\c\m\z\q\e\y\7\w\5\j\q\7\0\3\h\x\3\o\q\9\9\7\w\1\5\s\a\j\l\g\l\b\z\q\h\v\w\v\c\w\a\j\3\c\w\f\0\2\s\i\z\8\3\l\8\4\1\y\2\8\j\h\j\0\m\b\k\x\y\f\j\3\4\h\1\1\5\e\9\l\8\x\j\k\k\n\3\h\g\t\t\k\6\z\l\c\t\5\7\a\6\z\l\8\8\a\u\p\9\u\k\y\1\6\y\b\r\7\y\2\2\x\r\x\6\6\l\y\q\g\1\q\5\d\6\d\9\w\2\p\9\a\d\y\x\e\i\x\i\q\4\x\z\n\a\0\6\w\a\p\k\y\k\m\7\j\e\i\4\h\c\9\q\t\r\n\d\y\e\e\x\x\c\t\2\g\v\t\3\f\z\r\f\6\h\q\d\g\d\d\x\6\8\c\9\g\0\h\2\7\n\o\b\4\n\5\8\4\7\9\y\j\d\w\f\h\a\e\z\3\7\a\6\0\h\c\l\o\0\c\o\v\b\s\a\o\l\n\y\k\z\u\u\1\n\s\p\f\p\n\w\t\r\7\x\g\2\2\7\z\6\c\0\r\z\g\p\k\5\w\7\j\0\e\0\h\7\a\u\a\u\a\v\x\y\c\z\6\f\a\e\p\e\4\4\g\s\g\2\s\t\6\b\i\e\k\6\t\k\w\a\w\7\s\8\4\r\8\y\j\1\x\i\0\l\m\l\j\3\4\e\l\2\3\4\5\5\h\h\a\j\z\0\b\8\8\t\f\9\o\e\n\v\u\5\l\f\r\h\o\a\e\q\4\l\u\e\r\d\l\p\5\0\m\3\6\p\p\1\6\v\2\9\o\k\4\x\4\9\z\k\a\g\n\v\j\t\e\7\1\1\m\3\v\d\x\7\v\4\b\c\x\z\a\r\1\1\w\6\2\t\t\y\k\n\b\j\u\w\1\s\p\0\b\x\x\d\8\a\k\z\m\k\h\v\z\r\r\i\d\a\l\t\4\v\m\s\w\t\f\y\t\4\m\u\y\2\c\0\t\o\v\y\c\4\n\n\r\f\o\z\2\a\h\4\o\v\r\j\s\r\7\7\g\m\7\2\j\7\q\w\8\o\1\n\7\3\t\0\7\7\7\m\7\j\v\b\r\h\i\4\8\4\k\a\6\n\7\o\k\t\u\a\9\v\l\b\5\q\m\m\d\z\d\y\4\i\e\o\e\8\5\x\h\x\r\1\m\3\d\8\1\k\z\n\j\e\h\l\a\o\g\p\0\u\l\3\f\8\1\b\l\2\v\8\h\u\t\x\6\0\f\l\j\5\z\5\w\b\6\w\0\i\0\6\e\a\i\s\7\v\d\e\3\y\r\e\v\i\s\1\0\a\z\i\p\f\m\q\d\8\y\y\k\l\u\7\j\s\d\c\v\s\k\5\q\6\x\y\o\a\i\i\f\j\v\x\u\1\5\p\4\t\h\x\t\t\m\m\o\4\l\3\c\3\d\a\n\b\9\1\8\1\h\e\b\8\p\r\2\b\o\a\q\l\z\h\v\s\q\a\t\9\a\7\y\k\l\x\u\s\u\v\h\t\f\9\k\k\q\z\p\b\o\2\o\9\l\j\i\2\5\1\k\u\8\a\p\g\e\o\v\5\2\j\v\4\3\l\r\6\4\k\k\0\k\5\i\r\v\v\n\h\1\7\2\4\v\5\3\o\o\w\s\t\v\u\k\b\q\l\3\h\j\e\t\s\w\o\h\w\h\t\i\t\6\q\8\n\v\b\0\b\o\j\7\w\p\v\f\j\w\1\e\o\i\a\6\w\b\n\c\t\u\0\p\u\0\h\c\f\d\r\f\e\w\4\m\u\l\q\3\v\t\p\b\b\2\9\l\4\1\t\e\8\a\n\7\z\n\l\o\z\q\h\0\x ]] 00:07:54.582 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:55.150 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:55.150 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:55.150 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:55.150 22:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.150 [2024-07-15 22:34:12.828698] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:55.150 [2024-07-15 22:34:12.828801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64236 ] 00:07:55.150 { 00:07:55.150 "subsystems": [ 00:07:55.150 { 00:07:55.150 "subsystem": "bdev", 00:07:55.150 "config": [ 00:07:55.150 { 00:07:55.150 "params": { 00:07:55.150 "block_size": 512, 00:07:55.150 "num_blocks": 1048576, 00:07:55.150 "name": "malloc0" 00:07:55.150 }, 00:07:55.150 "method": "bdev_malloc_create" 00:07:55.150 }, 00:07:55.150 { 00:07:55.150 "params": { 00:07:55.150 "filename": "/dev/zram1", 00:07:55.150 "name": "uring0" 00:07:55.150 }, 00:07:55.150 "method": "bdev_uring_create" 00:07:55.150 }, 00:07:55.150 { 00:07:55.150 "method": "bdev_wait_for_examine" 00:07:55.150 } 00:07:55.150 ] 00:07:55.150 } 00:07:55.150 ] 00:07:55.150 } 00:07:55.150 [2024-07-15 22:34:12.967023] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.408 [2024-07-15 22:34:13.101675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.408 [2024-07-15 22:34:13.178318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.143  Copying: 132/512 [MB] (132 MBps) Copying: 269/512 [MB] (137 MBps) Copying: 402/512 [MB] (132 MBps) Copying: 512/512 [MB] (average 132 MBps) 00:08:00.143 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:00.143 22:34:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:00.143 [2024-07-15 22:34:17.765721] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
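Note on the deletion pass launched just above (uring.sh@87): the generated config now ends with a delete entry,

  { "params": { "name": "uring0" }, "method": "bdev_uring_delete" }

so uring0 is created and then removed while the app initializes, and the fd-to-fd copy that follows moves 0 bytes. The next step (uring.sh@94, further below) deliberately asks spdk_dd to read from the now-deleted uring0 and is wrapped in NOT: the "Could not open bdev uring0: No such device" error and the non-zero exit status later in the log are the expected outcome, not a failure of the test.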
00:08:00.143 [2024-07-15 22:34:17.765792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64315 ] 00:08:00.143 { 00:08:00.143 "subsystems": [ 00:08:00.143 { 00:08:00.143 "subsystem": "bdev", 00:08:00.143 "config": [ 00:08:00.143 { 00:08:00.143 "params": { 00:08:00.143 "block_size": 512, 00:08:00.143 "num_blocks": 1048576, 00:08:00.143 "name": "malloc0" 00:08:00.143 }, 00:08:00.143 "method": "bdev_malloc_create" 00:08:00.143 }, 00:08:00.143 { 00:08:00.143 "params": { 00:08:00.143 "filename": "/dev/zram1", 00:08:00.143 "name": "uring0" 00:08:00.143 }, 00:08:00.143 "method": "bdev_uring_create" 00:08:00.143 }, 00:08:00.143 { 00:08:00.143 "params": { 00:08:00.143 "name": "uring0" 00:08:00.143 }, 00:08:00.143 "method": "bdev_uring_delete" 00:08:00.143 }, 00:08:00.143 { 00:08:00.143 "method": "bdev_wait_for_examine" 00:08:00.143 } 00:08:00.143 ] 00:08:00.143 } 00:08:00.143 ] 00:08:00.143 } 00:08:00.143 [2024-07-15 22:34:17.955654] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.404 [2024-07-15 22:34:18.051817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.404 [2024-07-15 22:34:18.113160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.228  Copying: 0/0 [B] (average 0 Bps) 00:08:01.228 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.228 22:34:18 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:01.228 [2024-07-15 22:34:18.849492] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:01.228 [2024-07-15 22:34:18.849577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64344 ] 00:08:01.228 { 00:08:01.228 "subsystems": [ 00:08:01.228 { 00:08:01.228 "subsystem": "bdev", 00:08:01.228 "config": [ 00:08:01.228 { 00:08:01.228 "params": { 00:08:01.228 "block_size": 512, 00:08:01.228 "num_blocks": 1048576, 00:08:01.228 "name": "malloc0" 00:08:01.228 }, 00:08:01.228 "method": "bdev_malloc_create" 00:08:01.228 }, 00:08:01.228 { 00:08:01.228 "params": { 00:08:01.228 "filename": "/dev/zram1", 00:08:01.228 "name": "uring0" 00:08:01.228 }, 00:08:01.228 "method": "bdev_uring_create" 00:08:01.228 }, 00:08:01.228 { 00:08:01.228 "params": { 00:08:01.228 "name": "uring0" 00:08:01.228 }, 00:08:01.228 "method": "bdev_uring_delete" 00:08:01.228 }, 00:08:01.228 { 00:08:01.228 "method": "bdev_wait_for_examine" 00:08:01.228 } 00:08:01.228 ] 00:08:01.228 } 00:08:01.228 ] 00:08:01.228 } 00:08:01.228 [2024-07-15 22:34:18.988036] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.487 [2024-07-15 22:34:19.087319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.487 [2024-07-15 22:34:19.145276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.747 [2024-07-15 22:34:19.365687] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:01.747 [2024-07-15 22:34:19.365743] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:01.747 [2024-07-15 22:34:19.365755] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:01.747 [2024-07-15 22:34:19.365765] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.010 [2024-07-15 22:34:19.709992] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:08:02.010 22:34:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:02.269 00:08:02.269 real 0m16.594s 00:08:02.269 user 0m11.055s 00:08:02.269 sys 0m13.415s 00:08:02.269 22:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.269 22:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:02.269 ************************************ 00:08:02.269 END TEST dd_uring_copy 00:08:02.269 ************************************ 00:08:02.269 22:34:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:08:02.269 00:08:02.269 real 0m16.742s 00:08:02.269 user 0m11.115s 00:08:02.269 sys 0m13.497s 00:08:02.269 22:34:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.269 22:34:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:02.269 ************************************ 00:08:02.269 END TEST spdk_dd_uring 00:08:02.269 ************************************ 00:08:02.528 22:34:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:02.528 22:34:20 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:02.528 22:34:20 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.528 22:34:20 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.528 22:34:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.528 ************************************ 00:08:02.528 START TEST spdk_dd_sparse 00:08:02.528 ************************************ 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:02.528 * Looking for test storage... 00:08:02.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:02.528 1+0 records in 00:08:02.528 1+0 records out 00:08:02.528 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00743284 s, 564 MB/s 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:02.528 1+0 records in 00:08:02.528 1+0 records out 00:08:02.528 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00415333 s, 1.0 GB/s 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:02.528 1+0 records in 00:08:02.528 1+0 records out 00:08:02.528 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00731156 s, 574 MB/s 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:02.528 ************************************ 00:08:02.528 START TEST dd_sparse_file_to_file 00:08:02.528 ************************************ 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:02.528 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:02.528 [2024-07-15 22:34:20.324928] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:02.529 [2024-07-15 22:34:20.325076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64436 ] 00:08:02.529 { 00:08:02.529 "subsystems": [ 00:08:02.529 { 00:08:02.529 "subsystem": "bdev", 00:08:02.529 "config": [ 00:08:02.529 { 00:08:02.529 "params": { 00:08:02.529 "block_size": 4096, 00:08:02.529 "filename": "dd_sparse_aio_disk", 00:08:02.529 "name": "dd_aio" 00:08:02.529 }, 00:08:02.529 "method": "bdev_aio_create" 00:08:02.529 }, 00:08:02.529 { 00:08:02.529 "params": { 00:08:02.529 "lvs_name": "dd_lvstore", 00:08:02.529 "bdev_name": "dd_aio" 00:08:02.529 }, 00:08:02.529 "method": "bdev_lvol_create_lvstore" 00:08:02.529 }, 00:08:02.529 { 00:08:02.529 "method": "bdev_wait_for_examine" 00:08:02.529 } 00:08:02.529 ] 00:08:02.529 } 00:08:02.529 ] 00:08:02.529 } 00:08:02.787 [2024-07-15 22:34:20.471687] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.787 [2024-07-15 22:34:20.583931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.045 [2024-07-15 22:34:20.641479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.303  Copying: 12/36 [MB] (average 923 MBps) 00:08:03.303 00:08:03.303 22:34:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:03.303 22:34:21 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:03.303 00:08:03.303 real 0m0.747s 00:08:03.303 user 0m0.481s 00:08:03.303 sys 0m0.353s 00:08:03.303 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.304 ************************************ 00:08:03.304 END TEST dd_sparse_file_to_file 00:08:03.304 ************************************ 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:03.304 ************************************ 00:08:03.304 START TEST dd_sparse_file_to_bdev 00:08:03.304 ************************************ 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:03.304 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.304 [2024-07-15 22:34:21.129619] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:08:03.304 [2024-07-15 22:34:21.129725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64484 ] 00:08:03.304 { 00:08:03.304 "subsystems": [ 00:08:03.304 { 00:08:03.304 "subsystem": "bdev", 00:08:03.304 "config": [ 00:08:03.304 { 00:08:03.304 "params": { 00:08:03.304 "block_size": 4096, 00:08:03.304 "filename": "dd_sparse_aio_disk", 00:08:03.304 "name": "dd_aio" 00:08:03.304 }, 00:08:03.304 "method": "bdev_aio_create" 00:08:03.304 }, 00:08:03.304 { 00:08:03.304 "params": { 00:08:03.304 "lvs_name": "dd_lvstore", 00:08:03.304 "lvol_name": "dd_lvol", 00:08:03.304 "size_in_mib": 36, 00:08:03.304 "thin_provision": true 00:08:03.304 }, 00:08:03.304 "method": "bdev_lvol_create" 00:08:03.304 }, 00:08:03.304 { 00:08:03.304 "method": "bdev_wait_for_examine" 00:08:03.304 } 00:08:03.304 ] 00:08:03.304 } 00:08:03.304 ] 00:08:03.304 } 00:08:03.562 [2024-07-15 22:34:21.269147] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.562 [2024-07-15 22:34:21.349014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.821 [2024-07-15 22:34:21.403403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.079  Copying: 12/36 [MB] (average 480 MBps) 00:08:04.079 00:08:04.079 00:08:04.079 real 0m0.661s 00:08:04.079 user 0m0.427s 00:08:04.079 sys 0m0.337s 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.079 ************************************ 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.079 END TEST dd_sparse_file_to_bdev 00:08:04.079 ************************************ 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.079 22:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:04.079 ************************************ 00:08:04.079 START TEST dd_sparse_bdev_to_file 00:08:04.079 ************************************ 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:04.080 22:34:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:04.080 [2024-07-15 22:34:21.848570] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:04.080 [2024-07-15 22:34:21.848705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64511 ] 00:08:04.080 { 00:08:04.080 "subsystems": [ 00:08:04.080 { 00:08:04.080 "subsystem": "bdev", 00:08:04.080 "config": [ 00:08:04.080 { 00:08:04.080 "params": { 00:08:04.080 "block_size": 4096, 00:08:04.080 "filename": "dd_sparse_aio_disk", 00:08:04.080 "name": "dd_aio" 00:08:04.080 }, 00:08:04.080 "method": "bdev_aio_create" 00:08:04.080 }, 00:08:04.080 { 00:08:04.080 "method": "bdev_wait_for_examine" 00:08:04.080 } 00:08:04.080 ] 00:08:04.080 } 00:08:04.080 ] 00:08:04.080 } 00:08:04.338 [2024-07-15 22:34:21.981263] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.338 [2024-07-15 22:34:22.083212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.338 [2024-07-15 22:34:22.137496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.856  Copying: 12/36 [MB] (average 705 MBps) 00:08:04.856 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:04.856 00:08:04.856 real 0m0.684s 00:08:04.856 user 0m0.430s 00:08:04.856 sys 0m0.362s 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.856 ************************************ 00:08:04.856 END TEST dd_sparse_bdev_to_file 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:04.856 ************************************ 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:08:04.856 22:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:04.856 00:08:04.856 real 0m2.409s 00:08:04.857 user 0m1.420s 00:08:04.857 sys 0m1.267s 00:08:04.857 22:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.857 22:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:04.857 ************************************ 00:08:04.857 END TEST spdk_dd_sparse 00:08:04.857 ************************************ 00:08:04.857 22:34:22 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:04.857 22:34:22 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:04.857 22:34:22 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.857 22:34:22 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.857 22:34:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:04.857 ************************************ 00:08:04.857 START TEST spdk_dd_negative 00:08:04.857 ************************************ 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:04.857 * Looking for test storage... 00:08:04.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.857 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.122 ************************************ 00:08:05.122 START TEST dd_invalid_arguments 00:08:05.122 ************************************ 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.122 22:34:22 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.122 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:05.122 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:05.122 00:08:05.122 CPU options: 00:08:05.122 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:05.122 (like [0,1,10]) 00:08:05.122 --lcores lcore to CPU mapping list. The list is in the format: 00:08:05.122 [<,lcores[@CPUs]>...] 00:08:05.123 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:05.123 Within the group, '-' is used for range separator, 00:08:05.123 ',' is used for single number separator. 00:08:05.123 '( )' can be omitted for single element group, 00:08:05.123 '@' can be omitted if cpus and lcores have the same value 00:08:05.123 --disable-cpumask-locks Disable CPU core lock files. 00:08:05.123 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:05.123 pollers in the app support interrupt mode) 00:08:05.123 -p, --main-core main (primary) core for DPDK 00:08:05.123 00:08:05.123 Configuration options: 00:08:05.123 -c, --config, --json JSON config file 00:08:05.123 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:05.123 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:05.123 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:05.123 --rpcs-allowed comma-separated list of permitted RPCS 00:08:05.123 --json-ignore-init-errors don't exit on invalid config entry 00:08:05.123 00:08:05.123 Memory options: 00:08:05.123 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:05.123 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:05.123 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:05.123 -R, --huge-unlink unlink huge files after initialization 00:08:05.123 -n, --mem-channels number of memory channels used for DPDK 00:08:05.123 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:05.123 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:05.123 --no-huge run without using hugepages 00:08:05.123 --enforce-numa enforce NUMA allocations from the correct socket 00:08:05.123 -i, --shm-id shared memory ID (optional) 00:08:05.123 -g, --single-file-segments force creating just one hugetlbfs file 00:08:05.123 00:08:05.123 PCI options: 00:08:05.123 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:05.123 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:05.123 -u, --no-pci disable PCI access 00:08:05.123 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:05.123 00:08:05.123 Log options: 00:08:05.123 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:05.123 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:05.123 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:05.123 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:05.123 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:05.123 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:05.123 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:05.123 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:05.123 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:05.123 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:05.123 virtio_vfio_user, vmd) 00:08:05.123 --silence-noticelog disable notice level logging to stderr 00:08:05.123 00:08:05.123 Trace options: 00:08:05.123 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:05.123 setting 0 to disable trace (default 32768) 00:08:05.123 Tracepoints vary in size and can use more than one trace entry. 00:08:05.123 -e, --tpoint-group [:] 00:08:05.123 group_name - tracep/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:05.123 [2024-07-15 22:34:22.756844] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:05.123 oint group name for spdk trace buffers (bdev, ftl, 00:08:05.123 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:05.123 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:05.123 a tracepoint group. First tpoint inside a group can be enabled by 00:08:05.123 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:05.123 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:05.123 in /include/spdk_internal/trace_defs.h 00:08:05.123 00:08:05.123 Other options: 00:08:05.123 -h, --help show this usage 00:08:05.123 -v, --version print SPDK version 00:08:05.123 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:05.123 --env-context Opaque context for use of the env implementation 00:08:05.123 00:08:05.123 Application specific: 00:08:05.123 [--------- DD Options ---------] 00:08:05.123 --if Input file. Must specify either --if or --ib. 00:08:05.123 --ib Input bdev. Must specifier either --if or --ib 00:08:05.123 --of Output file. Must specify either --of or --ob. 00:08:05.123 --ob Output bdev. Must specify either --of or --ob. 00:08:05.123 --iflag Input file flags. 00:08:05.123 --oflag Output file flags. 00:08:05.123 --bs I/O unit size (default: 4096) 00:08:05.123 --qd Queue depth (default: 2) 00:08:05.123 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:05.123 --skip Skip this many I/O units at start of input. (default: 0) 00:08:05.123 --seek Skip this many I/O units at start of output. (default: 0) 00:08:05.123 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:05.123 --sparse Enable hole skipping in input target 00:08:05.123 Available iflag and oflag values: 00:08:05.123 append - append mode 00:08:05.123 direct - use direct I/O for data 00:08:05.123 directory - fail unless a directory 00:08:05.123 dsync - use synchronized I/O for data 00:08:05.123 noatime - do not update access time 00:08:05.123 noctty - do not assign controlling terminal from file 00:08:05.123 nofollow - do not follow symlinks 00:08:05.123 nonblock - use non-blocking I/O 00:08:05.123 sync - use synchronized I/O for data and metadata 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.123 00:08:05.123 real 0m0.076s 00:08:05.123 user 0m0.035s 00:08:05.123 sys 0m0.039s 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:05.123 ************************************ 00:08:05.123 END TEST dd_invalid_arguments 00:08:05.123 ************************************ 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.123 ************************************ 00:08:05.123 START TEST dd_double_input 00:08:05.123 ************************************ 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:05.123 [2024-07-15 22:34:22.885089] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
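[Annotation — not part of the captured console output. The spdk_dd_negative cases in this run all share one shape: build an intentionally invalid spdk_dd argument set, run it through the NOT helper from autotest_common.sh, and treat a non-zero exit as success. A minimal sketch of that shape for the double-input case above, assuming NOT simply inverts the wrapped command's exit status as the surrounding xtrace suggests; this is an illustration, not the verbatim contents of test/dd/negative_dd.sh:]

    # Illustrative sketch of the negative-test pattern (hypothetical helper body).
    # NOT succeeds only if the wrapped command fails, so the test passes when
    # spdk_dd rejects the conflicting --if/--ib pair.
    double_input() {
        # test_file0 is /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 in this run
        NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$test_file0" --ib= --ob=
        # expected failure: "You may specify either --if or --ib, but not both."
    }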
00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.123 00:08:05.123 real 0m0.075s 00:08:05.123 user 0m0.048s 00:08:05.123 sys 0m0.026s 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:05.123 ************************************ 00:08:05.123 END TEST dd_double_input 00:08:05.123 ************************************ 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.123 ************************************ 00:08:05.123 START TEST dd_double_output 00:08:05.123 ************************************ 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:05.123 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:05.124 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:05.124 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.383 22:34:22 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:05.383 [2024-07-15 22:34:23.014444] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.383 00:08:05.383 real 0m0.079s 00:08:05.383 user 0m0.052s 00:08:05.383 sys 0m0.026s 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:05.383 ************************************ 00:08:05.383 END TEST dd_double_output 00:08:05.383 ************************************ 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.383 ************************************ 00:08:05.383 START TEST dd_no_input 00:08:05.383 ************************************ 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.383 22:34:23 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:05.383 [2024-07-15 22:34:23.130164] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.383 00:08:05.383 real 0m0.060s 00:08:05.383 user 0m0.035s 00:08:05.383 sys 0m0.024s 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:05.383 ************************************ 00:08:05.383 END TEST dd_no_input 00:08:05.383 ************************************ 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.383 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.383 ************************************ 00:08:05.384 START TEST dd_no_output 00:08:05.384 ************************************ 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.384 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.384 22:34:23 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.642 [2024-07-15 22:34:23.258745] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.642 00:08:05.642 real 0m0.078s 00:08:05.642 user 0m0.049s 00:08:05.642 sys 0m0.027s 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:05.642 ************************************ 00:08:05.642 END TEST dd_no_output 00:08:05.642 ************************************ 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.642 ************************************ 00:08:05.642 START TEST dd_wrong_blocksize 00:08:05.642 ************************************ 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:05.642 [2024-07-15 22:34:23.394231] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.642 00:08:05.642 real 0m0.079s 00:08:05.642 user 0m0.050s 00:08:05.642 sys 0m0.027s 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:05.642 ************************************ 00:08:05.642 END TEST dd_wrong_blocksize 00:08:05.642 ************************************ 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.642 ************************************ 00:08:05.642 START TEST dd_smaller_blocksize 00:08:05.642 ************************************ 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:05.642 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.643 22:34:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:05.901 [2024-07-15 22:34:23.534573] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:05.901 [2024-07-15 22:34:23.534703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64735 ] 00:08:05.901 [2024-07-15 22:34:23.674198] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.158 [2024-07-15 22:34:23.748020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.158 [2024-07-15 22:34:23.806613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.416 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:06.675 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:06.675 [2024-07-15 22:34:24.361437] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:06.675 [2024-07-15 22:34:24.361545] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.675 [2024-07-15 22:34:24.477876] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:06.934 00:08:06.934 real 0m1.112s 00:08:06.934 user 0m0.409s 00:08:06.934 sys 0m0.595s 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:06.934 ************************************ 00:08:06.934 END TEST dd_smaller_blocksize 00:08:06.934 ************************************ 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test 
dd_invalid_count invalid_count 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:06.934 ************************************ 00:08:06.934 START TEST dd_invalid_count 00:08:06.934 ************************************ 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.934 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:06.934 [2024-07-15 22:34:24.698506] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:06.935 00:08:06.935 real 0m0.076s 00:08:06.935 user 0m0.039s 00:08:06.935 sys 0m0.036s 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:06.935 
************************************ 00:08:06.935 END TEST dd_invalid_count 00:08:06.935 ************************************ 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.935 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.194 ************************************ 00:08:07.194 START TEST dd_invalid_oflag 00:08:07.194 ************************************ 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:07.194 [2024-07-15 22:34:24.826311] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.194 00:08:07.194 real 0m0.075s 00:08:07.194 user 0m0.050s 00:08:07.194 sys 0m0.023s 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.194 ************************************ 00:08:07.194 END TEST dd_invalid_oflag 00:08:07.194 ************************************ 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.194 ************************************ 00:08:07.194 START TEST dd_invalid_iflag 00:08:07.194 ************************************ 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.194 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:07.195 [2024-07-15 22:34:24.958799] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.195 00:08:07.195 real 0m0.075s 00:08:07.195 user 0m0.051s 00:08:07.195 sys 0m0.024s 00:08:07.195 22:34:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.195 22:34:24 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:07.195 ************************************ 00:08:07.195 END TEST dd_invalid_iflag 00:08:07.195 ************************************ 00:08:07.195 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:07.195 22:34:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:07.195 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.195 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.195 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.454 ************************************ 00:08:07.454 START TEST dd_unknown_flag 00:08:07.454 ************************************ 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.454 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:07.454 [2024-07-15 22:34:25.087632] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:08:07.454 [2024-07-15 22:34:25.087714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64833 ] 00:08:07.454 [2024-07-15 22:34:25.225987] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.713 [2024-07-15 22:34:25.296573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.713 [2024-07-15 22:34:25.347148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.713 [2024-07-15 22:34:25.378052] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:07.713 [2024-07-15 22:34:25.378096] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.713 [2024-07-15 22:34:25.378157] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:07.713 [2024-07-15 22:34:25.378170] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.713 [2024-07-15 22:34:25.378470] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:07.713 [2024-07-15 22:34:25.378499] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.713 [2024-07-15 22:34:25.378543] app.c:1045:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:07.713 [2024-07-15 22:34:25.378552] app.c:1045:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:07.713 [2024-07-15 22:34:25.485877] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.973 00:08:07.973 real 0m0.546s 00:08:07.973 user 0m0.292s 00:08:07.973 sys 0m0.161s 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.973 ************************************ 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 END TEST dd_unknown_flag 00:08:07.973 ************************************ 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 ************************************ 00:08:07.973 START TEST dd_invalid_json 00:08:07.973 ************************************ 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:08:07.973 22:34:25 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.973 22:34:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:07.973 [2024-07-15 22:34:25.679539] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:08:07.973 [2024-07-15 22:34:25.679642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64861 ] 00:08:08.232 [2024-07-15 22:34:25.811274] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.232 [2024-07-15 22:34:25.910645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.232 [2024-07-15 22:34:25.910704] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:08.232 [2024-07-15 22:34:25.910719] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.232 [2024-07-15 22:34:25.910729] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.232 [2024-07-15 22:34:25.910764] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.232 00:08:08.232 real 0m0.393s 00:08:08.232 user 0m0.223s 00:08:08.232 sys 0m0.068s 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:08.232 ************************************ 00:08:08.232 END TEST dd_invalid_json 00:08:08.232 ************************************ 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:08.232 00:08:08.232 real 0m3.470s 00:08:08.232 user 0m1.585s 00:08:08.232 sys 0m1.514s 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.232 22:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.232 ************************************ 00:08:08.232 END TEST spdk_dd_negative 00:08:08.232 ************************************ 00:08:08.492 22:34:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:08.492 00:08:08.492 real 1m25.355s 00:08:08.492 user 0m55.949s 00:08:08.492 sys 0m36.377s 00:08:08.492 22:34:26 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.492 22:34:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:08.492 ************************************ 00:08:08.492 END TEST spdk_dd 00:08:08.492 ************************************ 00:08:08.492 22:34:26 -- common/autotest_common.sh@1142 -- # return 0 00:08:08.492 22:34:26 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:08.492 22:34:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.492 22:34:26 -- common/autotest_common.sh@10 -- # set +x 00:08:08.492 22:34:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:08.492 22:34:26 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:08.492 22:34:26 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:08.492 22:34:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.492 22:34:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.492 22:34:26 -- common/autotest_common.sh@10 -- # set +x 00:08:08.492 ************************************ 00:08:08.492 START TEST nvmf_tcp 00:08:08.492 ************************************ 00:08:08.492 22:34:26 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:08.492 * Looking for test storage... 00:08:08.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.492 22:34:26 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.492 22:34:26 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.492 22:34:26 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.492 22:34:26 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.492 22:34:26 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.492 22:34:26 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.492 22:34:26 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:08.492 22:34:26 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:08.492 22:34:26 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.492 22:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:08.492 22:34:26 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:08.492 22:34:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.492 22:34:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.492 22:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.751 ************************************ 00:08:08.751 START TEST nvmf_host_management 00:08:08.751 ************************************ 00:08:08.751 
22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:08.751 * Looking for test storage... 00:08:08.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.751 22:34:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:08.752 Cannot find device "nvmf_init_br" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:08.752 Cannot find device "nvmf_tgt_br" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.752 Cannot find device "nvmf_tgt_br2" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:08.752 Cannot find device "nvmf_init_br" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:08.752 Cannot find device "nvmf_tgt_br" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:08.752 22:34:26 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:08.752 Cannot find device "nvmf_tgt_br2" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:08.752 Cannot find device "nvmf_br" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:08.752 Cannot find device "nvmf_init_if" 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.752 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:09.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:08:09.011 00:08:09.011 --- 10.0.0.2 ping statistics --- 00:08:09.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.011 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:09.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:09.011 00:08:09.011 --- 10.0.0.3 ping statistics --- 00:08:09.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.011 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:08:09.011 00:08:09.011 --- 10.0.0.1 ping statistics --- 00:08:09.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.011 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.011 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65119 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65119 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65119 ']' 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.270 22:34:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.270 [2024-07-15 22:34:26.922511] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:09.270 [2024-07-15 22:34:26.922606] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.270 [2024-07-15 22:34:27.064018] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.530 [2024-07-15 22:34:27.183623] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.530 [2024-07-15 22:34:27.183690] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.530 [2024-07-15 22:34:27.183710] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.530 [2024-07-15 22:34:27.183720] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.530 [2024-07-15 22:34:27.183729] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:09.530 [2024-07-15 22:34:27.183909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.530 [2024-07-15 22:34:27.184461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.530 [2024-07-15 22:34:27.184606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:09.530 [2024-07-15 22:34:27.184649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.530 [2024-07-15 22:34:27.245461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.466 22:34:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.466 22:34:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:10.466 22:34:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.466 22:34:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.466 22:34:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 [2024-07-15 22:34:28.016580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 Malloc0 00:08:10.466 [2024-07-15 22:34:28.097345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65179 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65179 /var/tmp/bdevperf.sock 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65179 ']' 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:10.466 { 00:08:10.466 "params": { 00:08:10.466 "name": "Nvme$subsystem", 00:08:10.466 "trtype": "$TEST_TRANSPORT", 00:08:10.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.466 "adrfam": "ipv4", 00:08:10.466 "trsvcid": "$NVMF_PORT", 00:08:10.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.466 "hdgst": ${hdgst:-false}, 00:08:10.466 "ddgst": ${ddgst:-false} 00:08:10.466 }, 00:08:10.466 "method": "bdev_nvme_attach_controller" 00:08:10.466 } 00:08:10.466 EOF 00:08:10.466 )") 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:10.466 22:34:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:10.466 "params": { 00:08:10.467 "name": "Nvme0", 00:08:10.467 "trtype": "tcp", 00:08:10.467 "traddr": "10.0.0.2", 00:08:10.467 "adrfam": "ipv4", 00:08:10.467 "trsvcid": "4420", 00:08:10.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.467 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:10.467 "hdgst": false, 00:08:10.467 "ddgst": false 00:08:10.467 }, 00:08:10.467 "method": "bdev_nvme_attach_controller" 00:08:10.467 }' 00:08:10.467 [2024-07-15 22:34:28.203248] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
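
gen_nvmf_target_json resolves into the bdev_nvme_attach_controller entry printed just above the SPDK startup banner, and bdevperf receives it through /dev/fd/63. Feeding the same configuration from a file is equivalent; a sketch follows, where the outer subsystems/config wrapper is the standard SPDK JSON-config shape and is assumed here, since only the params object is echoed in the trace:

# sketch: the same bdevperf run, fed its bdev config from a file instead of a process substitution
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10
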
00:08:10.467 [2024-07-15 22:34:28.203327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65179 ] 00:08:10.726 [2024-07-15 22:34:28.342441] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.726 [2024-07-15 22:34:28.456988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.726 [2024-07-15 22:34:28.528956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.985 Running I/O for 10 seconds... 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:11.554 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.555 22:34:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:11.555 task offset: 0 on job bdev=Nvme0n1 fails 00:08:11.555 00:08:11.555 Latency(us) 00:08:11.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:11.555 Job: Nvme0n1 ended in about 0.70 seconds with error 00:08:11.555 Verification LBA range: start 0x0 length 0x400 00:08:11.555 Nvme0n1 : 0.70 1463.23 91.45 91.45 0.00 40301.74 1906.50 38368.35 00:08:11.555 =================================================================================================================== 00:08:11.555 Total : 1463.23 91.45 91.45 0.00 40301.74 1906.50 38368.35 00:08:11.555 [2024-07-15 22:34:29.350470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.350981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.350990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.555 [2024-07-15 22:34:29.351249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.555 [2024-07-15 22:34:29.351257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.556 [2024-07-15 22:34:29.351836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.351850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1c0 is same with the state(5) to be set 00:08:11.556 [2024-07-15 22:34:29.351921] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a1a1c0 was disconnected and freed. reset controller. 
00:08:11.556 [2024-07-15 22:34:29.352020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.556 [2024-07-15 22:34:29.352036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.352046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.556 [2024-07-15 22:34:29.352054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.352063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.556 [2024-07-15 22:34:29.352071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.352080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.556 [2024-07-15 22:34:29.352088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.556 [2024-07-15 22:34:29.352096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a11ef0 is same with the state(5) to be set 00:08:11.556 [2024-07-15 22:34:29.353009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:11.556 [2024-07-15 22:34:29.354713] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.556 [2024-07-15 22:34:29.354733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a11ef0 (9): Bad file descriptor 00:08:11.556 [2024-07-15 22:34:29.359973] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
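
The run of ABORTED - SQ DELETION completions and the reset that follows are the point of the host-management check: host0 is removed from cnode0's allowed hosts while bdevperf's verify workload is in flight, the target drops the connection and aborts everything queued, and re-adding the host lets the initiator's automatic controller reset reconnect ("Resetting controller successful" above). The two RPCs driving it, as a sketch with the same NQNs used throughout this trace:

# remove the host while I/O is running: in-flight commands complete as ABORTED - SQ DELETION
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# add it back: the initiator's reset/reconnect can then succeed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
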
00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65179 00:08:12.934 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65179) - No such process 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.934 { 00:08:12.934 "params": { 00:08:12.934 "name": "Nvme$subsystem", 00:08:12.934 "trtype": "$TEST_TRANSPORT", 00:08:12.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.934 "adrfam": "ipv4", 00:08:12.934 "trsvcid": "$NVMF_PORT", 00:08:12.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.934 "hdgst": ${hdgst:-false}, 00:08:12.934 "ddgst": ${ddgst:-false} 00:08:12.934 }, 00:08:12.934 "method": "bdev_nvme_attach_controller" 00:08:12.934 } 00:08:12.934 EOF 00:08:12.934 )") 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:12.934 22:34:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.934 "params": { 00:08:12.934 "name": "Nvme0", 00:08:12.934 "trtype": "tcp", 00:08:12.934 "traddr": "10.0.0.2", 00:08:12.934 "adrfam": "ipv4", 00:08:12.934 "trsvcid": "4420", 00:08:12.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:12.934 "hdgst": false, 00:08:12.934 "ddgst": false 00:08:12.934 }, 00:08:12.934 "method": "bdev_nvme_attach_controller" 00:08:12.934 }' 00:08:12.934 [2024-07-15 22:34:30.407580] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:12.934 [2024-07-15 22:34:30.407692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65217 ] 00:08:12.934 [2024-07-15 22:34:30.545939] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.934 [2024-07-15 22:34:30.626816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.934 [2024-07-15 22:34:30.686524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.192 Running I/O for 1 seconds... 
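
The second bdevperf pass above (-t 1, with a freshly generated JSON config) just confirms that I/O flows again once the host entry is restored. When a bdevperf instance exposes an RPC socket, as the first one did with -r /var/tmp/bdevperf.sock, its progress can be sampled the same way the earlier waitforio loop did; a sketch:

# sketch: sample completed read I/O on the Nvme0n1 bdev over bdevperf's RPC socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'
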
00:08:14.126 00:08:14.126 Latency(us) 00:08:14.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.126 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.126 Verification LBA range: start 0x0 length 0x400 00:08:14.126 Nvme0n1 : 1.02 1508.65 94.29 0.00 0.00 41706.13 4349.21 39321.60 00:08:14.126 =================================================================================================================== 00:08:14.126 Total : 1508.65 94.29 0.00 0.00 41706.13 4349.21 39321.60 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.384 rmmod nvme_tcp 00:08:14.384 rmmod nvme_fabrics 00:08:14.384 rmmod nvme_keyring 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65119 ']' 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65119 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65119 ']' 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65119 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65119 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65119' 00:08:14.384 killing process with pid 65119 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65119 00:08:14.384 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65119 00:08:14.642 [2024-07-15 22:34:32.395589] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:14.642 00:08:14.642 real 0m6.131s 00:08:14.642 user 0m23.704s 00:08:14.642 sys 0m1.616s 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.642 22:34:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.642 ************************************ 00:08:14.642 END TEST nvmf_host_management 00:08:14.642 ************************************ 00:08:14.900 22:34:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:14.900 22:34:32 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:14.900 22:34:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.900 22:34:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.900 22:34:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.900 ************************************ 00:08:14.900 START TEST nvmf_lvol 00:08:14.900 ************************************ 00:08:14.900 22:34:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:14.900 * Looking for test storage... 
00:08:14.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.900 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.900 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:14.900 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.900 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:14.901 22:34:32 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:14.901 Cannot find device "nvmf_tgt_br" 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.901 Cannot find device "nvmf_tgt_br2" 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:14.901 Cannot find device "nvmf_tgt_br" 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:14.901 Cannot find device "nvmf_tgt_br2" 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:14.901 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.159 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:15.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:15.160 00:08:15.160 --- 10.0.0.2 ping statistics --- 00:08:15.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.160 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:15.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:15.160 00:08:15.160 --- 10.0.0.3 ping statistics --- 00:08:15.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.160 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:15.160 00:08:15.160 --- 10.0.0.1 ping statistics --- 00:08:15.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.160 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.160 22:34:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65426 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65426 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65426 ']' 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.418 22:34:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.419 [2024-07-15 22:34:33.071753] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:15.419 [2024-07-15 22:34:33.071879] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.419 [2024-07-15 22:34:33.214115] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.677 [2024-07-15 22:34:33.309093] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.677 [2024-07-15 22:34:33.309168] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:15.677 [2024-07-15 22:34:33.309200] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.677 [2024-07-15 22:34:33.309208] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.677 [2024-07-15 22:34:33.309215] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.677 [2024-07-15 22:34:33.309816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.677 [2024-07-15 22:34:33.309995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.677 [2024-07-15 22:34:33.309999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.677 [2024-07-15 22:34:33.368041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.249 22:34:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.249 22:34:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:16.249 22:34:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.249 22:34:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.249 22:34:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.559 22:34:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.559 22:34:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.559 [2024-07-15 22:34:34.363533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.559 22:34:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.126 22:34:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:17.126 22:34:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.385 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:17.385 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:17.645 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:17.903 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=353073e1-6744-4416-9768-e560893880cd 00:08:17.903 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 353073e1-6744-4416-9768-e560893880cd lvol 20 00:08:18.162 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4a7f7a3f-8bc2-4504-b82f-d2a24d1d9b2b 00:08:18.162 22:34:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.421 22:34:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a7f7a3f-8bc2-4504-b82f-d2a24d1d9b2b 00:08:18.680 22:34:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:18.939 [2024-07-15 22:34:36.601627] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.939 22:34:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.198 22:34:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65502 00:08:19.198 22:34:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:19.198 22:34:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:20.134 22:34:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4a7f7a3f-8bc2-4504-b82f-d2a24d1d9b2b MY_SNAPSHOT 00:08:20.393 22:34:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5cd48969-c8fc-4e85-884b-f8e720f49fcf 00:08:20.393 22:34:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4a7f7a3f-8bc2-4504-b82f-d2a24d1d9b2b 30 00:08:20.652 22:34:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5cd48969-c8fc-4e85-884b-f8e720f49fcf MY_CLONE 00:08:21.220 22:34:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=35a921a4-bb96-4373-8c29-cf7231216430 00:08:21.220 22:34:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 35a921a4-bb96-4373-8c29-cf7231216430 00:08:21.479 22:34:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65502 00:08:29.634 Initializing NVMe Controllers 00:08:29.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:29.634 Controller IO queue size 128, less than required. 00:08:29.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:29.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:29.635 Initialization complete. Launching workers. 
00:08:29.635 ======================================================== 00:08:29.635 Latency(us) 00:08:29.635 Device Information : IOPS MiB/s Average min max 00:08:29.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10583.30 41.34 12103.89 2602.61 99407.09 00:08:29.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10357.10 40.46 12363.81 3081.90 69874.21 00:08:29.635 ======================================================== 00:08:29.635 Total : 20940.40 81.80 12232.45 2602.61 99407.09 00:08:29.635 00:08:29.635 22:34:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.893 22:34:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a7f7a3f-8bc2-4504-b82f-d2a24d1d9b2b 00:08:30.152 22:34:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 353073e1-6744-4416-9768-e560893880cd 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.410 rmmod nvme_tcp 00:08:30.410 rmmod nvme_fabrics 00:08:30.410 rmmod nvme_keyring 00:08:30.410 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65426 ']' 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65426 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65426 ']' 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65426 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65426 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.669 killing process with pid 65426 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65426' 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65426 00:08:30.669 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65426 00:08:30.928 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.928 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
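While spdk_nvme_perf kept 128 queued 4 KiB randwrites in flight against the exported lvol, the script walked the snapshot workflow entirely over JSON-RPC. A condensed sketch of those four calls, using the UUID that bdev_lvol_create printed earlier in this run ($rpc and the captured $snap/$clone variables are local shorthands mirroring what the harness does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvol=4a7f7a3f-8bc2-4504-b82f-d2a24d1d9b2b                  # lvol created by bdev_lvol_create above
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)        # read-only snapshot of the live lvol
    $rpc bdev_lvol_resize "$lvol" 30                           # grow the writable lvol from 20 to 30 (same MiB units as creation)
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)             # thin clone backed by the snapshot
    $rpc bdev_lvol_inflate "$clone"                            # copy shared clusters so the clone no longer depends on the snapshot

Doing this under active I/O is the point of the test: none of the four RPCs disturbed the perf job whose results appear above.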
00:08:30.928 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.928 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.928 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.928 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:30.929 00:08:30.929 real 0m16.076s 00:08:30.929 user 1m5.842s 00:08:30.929 sys 0m4.906s 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.929 ************************************ 00:08:30.929 END TEST nvmf_lvol 00:08:30.929 ************************************ 00:08:30.929 22:34:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:30.929 22:34:48 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.929 22:34:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.929 22:34:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.929 22:34:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.929 ************************************ 00:08:30.929 START TEST nvmf_lvs_grow 00:08:30.929 ************************************ 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.929 * Looking for test storage... 
00:08:30.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.929 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.188 Cannot find device "nvmf_tgt_br" 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.188 Cannot find device "nvmf_tgt_br2" 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.188 Cannot find device "nvmf_tgt_br" 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.188 Cannot find device "nvmf_tgt_br2" 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.188 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.188 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.189 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.189 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.189 22:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.189 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.189 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.189 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.189 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:08:31.447 00:08:31.447 --- 10.0.0.2 ping statistics --- 00:08:31.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.447 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:31.447 00:08:31.447 --- 10.0.0.3 ping statistics --- 00:08:31.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.447 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:31.447 00:08:31.447 --- 10.0.0.1 ping statistics --- 00:08:31.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.447 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65830 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65830 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65830 ']' 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
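nvmfappstart above backgrounds nvmf_tgt inside the namespace, and waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A rough standalone equivalent of that wait, shown only to illustrate the idea (the real helper lives in autotest_common.sh; the polling loop below and the use of rpc_get_methods as a liveness probe are this sketch's own choices, not the harness code):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before its RPC socket came up" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"

Every rpc.py call that follows in the log talks to this same UNIX-domain socket.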
00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.447 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:31.447 [2024-07-15 22:34:49.173539] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:31.447 [2024-07-15 22:34:49.173634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.705 [2024-07-15 22:34:49.310035] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.705 [2024-07-15 22:34:49.385721] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.705 [2024-07-15 22:34:49.385793] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.705 [2024-07-15 22:34:49.385803] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.705 [2024-07-15 22:34:49.385811] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.705 [2024-07-15 22:34:49.385817] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.705 [2024-07-15 22:34:49.385840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.705 [2024-07-15 22:34:49.440349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.705 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.705 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:31.705 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.705 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.705 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.963 22:34:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.963 22:34:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.221 [2024-07-15 22:34:49.802455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.222 ************************************ 00:08:32.222 START TEST lvs_grow_clean 00:08:32.222 ************************************ 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local 
data_clusters free_clusters 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.222 22:34:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.480 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.480 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.739 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7023f140-c662-4ced-bb46-a1f54226fbea 00:08:32.739 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:32.739 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:32.998 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:32.998 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:32.998 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7023f140-c662-4ced-bb46-a1f54226fbea lvol 150 00:08:33.257 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4377fe35-63ab-475b-9286-bdfa977f4003 00:08:33.257 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.257 22:34:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.516 [2024-07-15 22:34:51.183750] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.516 [2024-07-15 22:34:51.183835] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.516 true 00:08:33.516 22:34:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:33.516 22:34:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.775 22:34:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.775 22:34:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.034 22:34:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4377fe35-63ab-475b-9286-bdfa977f4003 00:08:34.292 22:34:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.551 [2024-07-15 22:34:52.140309] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.551 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65905 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65905 /var/tmp/bdevperf.sock 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65905 ']' 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.810 22:34:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:34.810 [2024-07-15 22:34:52.513296] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
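The lvs_grow_clean setup above needs no real disk: the lvstore sits on an AIO bdev backed by a plain 200 MiB file, and the lvol carved from it is exported over NVMe/TCP. Condensed from the rpc.py calls above ($rpc, $aio, $lvs and $lvol are local shorthands; the cluster count comes out to 49 rather than 50 because the lvstore keeps roughly one 4 MiB cluster's worth of the file for its own metadata):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio"                                    # sparse 200 MiB backing file
    $rpc bdev_aio_create "$aio" aio_bdev 4096                  # AIO bdev with 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 4 MiB clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB lvol (thick provisioned by default)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that subsystem from the host side of the veth pair and drives the randwrite workload against Nvme0n1.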
00:08:34.810 [2024-07-15 22:34:52.513402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65905 ] 00:08:35.069 [2024-07-15 22:34:52.650335] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.069 [2024-07-15 22:34:52.751695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.069 [2024-07-15 22:34:52.809452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:35.635 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.635 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:35.635 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.893 Nvme0n1 00:08:35.893 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:36.151 [ 00:08:36.151 { 00:08:36.151 "name": "Nvme0n1", 00:08:36.151 "aliases": [ 00:08:36.151 "4377fe35-63ab-475b-9286-bdfa977f4003" 00:08:36.151 ], 00:08:36.151 "product_name": "NVMe disk", 00:08:36.151 "block_size": 4096, 00:08:36.151 "num_blocks": 38912, 00:08:36.151 "uuid": "4377fe35-63ab-475b-9286-bdfa977f4003", 00:08:36.151 "assigned_rate_limits": { 00:08:36.151 "rw_ios_per_sec": 0, 00:08:36.151 "rw_mbytes_per_sec": 0, 00:08:36.151 "r_mbytes_per_sec": 0, 00:08:36.151 "w_mbytes_per_sec": 0 00:08:36.151 }, 00:08:36.151 "claimed": false, 00:08:36.151 "zoned": false, 00:08:36.151 "supported_io_types": { 00:08:36.151 "read": true, 00:08:36.151 "write": true, 00:08:36.151 "unmap": true, 00:08:36.151 "flush": true, 00:08:36.151 "reset": true, 00:08:36.151 "nvme_admin": true, 00:08:36.151 "nvme_io": true, 00:08:36.151 "nvme_io_md": false, 00:08:36.151 "write_zeroes": true, 00:08:36.151 "zcopy": false, 00:08:36.151 "get_zone_info": false, 00:08:36.151 "zone_management": false, 00:08:36.151 "zone_append": false, 00:08:36.151 "compare": true, 00:08:36.151 "compare_and_write": true, 00:08:36.151 "abort": true, 00:08:36.151 "seek_hole": false, 00:08:36.151 "seek_data": false, 00:08:36.151 "copy": true, 00:08:36.151 "nvme_iov_md": false 00:08:36.151 }, 00:08:36.151 "memory_domains": [ 00:08:36.151 { 00:08:36.151 "dma_device_id": "system", 00:08:36.151 "dma_device_type": 1 00:08:36.151 } 00:08:36.151 ], 00:08:36.151 "driver_specific": { 00:08:36.151 "nvme": [ 00:08:36.151 { 00:08:36.151 "trid": { 00:08:36.151 "trtype": "TCP", 00:08:36.151 "adrfam": "IPv4", 00:08:36.151 "traddr": "10.0.0.2", 00:08:36.151 "trsvcid": "4420", 00:08:36.151 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:36.151 }, 00:08:36.151 "ctrlr_data": { 00:08:36.151 "cntlid": 1, 00:08:36.151 "vendor_id": "0x8086", 00:08:36.151 "model_number": "SPDK bdev Controller", 00:08:36.151 "serial_number": "SPDK0", 00:08:36.151 "firmware_revision": "24.09", 00:08:36.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.151 "oacs": { 00:08:36.151 "security": 0, 00:08:36.151 "format": 0, 00:08:36.151 "firmware": 0, 00:08:36.151 "ns_manage": 0 00:08:36.151 }, 00:08:36.151 "multi_ctrlr": true, 00:08:36.151 
"ana_reporting": false 00:08:36.151 }, 00:08:36.151 "vs": { 00:08:36.151 "nvme_version": "1.3" 00:08:36.151 }, 00:08:36.151 "ns_data": { 00:08:36.151 "id": 1, 00:08:36.151 "can_share": true 00:08:36.151 } 00:08:36.151 } 00:08:36.151 ], 00:08:36.151 "mp_policy": "active_passive" 00:08:36.151 } 00:08:36.151 } 00:08:36.151 ] 00:08:36.151 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65929 00:08:36.151 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.151 22:34:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:36.409 Running I/O for 10 seconds... 00:08:37.345 Latency(us) 00:08:37.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.345 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:37.345 =================================================================================================================== 00:08:37.345 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:37.345 00:08:38.281 22:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:38.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.281 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:38.281 =================================================================================================================== 00:08:38.281 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:38.281 00:08:38.540 true 00:08:38.540 22:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:38.540 22:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:38.799 22:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:38.799 22:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:38.799 22:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65929 00:08:39.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.367 Nvme0n1 : 3.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:39.367 =================================================================================================================== 00:08:39.367 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:39.367 00:08:40.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.304 Nvme0n1 : 4.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:40.304 =================================================================================================================== 00:08:40.304 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:40.304 00:08:41.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.276 Nvme0n1 : 5.00 7467.60 29.17 0.00 0.00 0.00 0.00 0.00 00:08:41.276 =================================================================================================================== 00:08:41.276 Total : 7467.60 29.17 0.00 0.00 0.00 
0.00 0.00 00:08:41.276 00:08:42.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.648 Nvme0n1 : 6.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:42.648 =================================================================================================================== 00:08:42.648 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:42.648 00:08:43.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.580 Nvme0n1 : 7.00 7438.57 29.06 0.00 0.00 0.00 0.00 0.00 00:08:43.580 =================================================================================================================== 00:08:43.580 Total : 7438.57 29.06 0.00 0.00 0.00 0.00 0.00 00:08:43.580 00:08:44.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.513 Nvme0n1 : 8.00 7413.62 28.96 0.00 0.00 0.00 0.00 0.00 00:08:44.513 =================================================================================================================== 00:08:44.513 Total : 7413.62 28.96 0.00 0.00 0.00 0.00 0.00 00:08:44.513 00:08:45.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.488 Nvme0n1 : 9.00 7380.11 28.83 0.00 0.00 0.00 0.00 0.00 00:08:45.488 =================================================================================================================== 00:08:45.488 Total : 7380.11 28.83 0.00 0.00 0.00 0.00 0.00 00:08:45.488 00:08:46.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.422 Nvme0n1 : 10.00 7353.30 28.72 0.00 0.00 0.00 0.00 0.00 00:08:46.422 =================================================================================================================== 00:08:46.422 Total : 7353.30 28.72 0.00 0.00 0.00 0.00 0.00 00:08:46.422 00:08:46.422 00:08:46.422 Latency(us) 00:08:46.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.422 Nvme0n1 : 10.01 7358.36 28.74 0.00 0.00 17390.17 13405.09 39083.29 00:08:46.422 =================================================================================================================== 00:08:46.422 Total : 7358.36 28.74 0.00 0.00 17390.17 13405.09 39083.29 00:08:46.422 0 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65905 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65905 ']' 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65905 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65905 00:08:46.422 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:46.423 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:46.423 killing process with pid 65905 00:08:46.423 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65905' 00:08:46.423 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65905 
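The grow itself, exercised while bdevperf is still running the 10-second randwrite job, is three RPCs plus a check; in this run total_data_clusters moves from 49 to 99 because the backing file was doubled. A condensed sketch using the lvstore UUID reported above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs=7023f140-c662-4ced-bb46-a1f54226fbea                   # lvstore UUID from this run

    truncate -s 400M "$aio"                                    # enlarge the backing file first
    $rpc bdev_aio_rescan aio_bdev                              # AIO bdev re-reads its size (51200 -> 102400 blocks)
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                      # lvstore claims the newly visible clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99

In the log the truncate/rescan happened before bdevperf started and only bdev_lvol_grow_lvstore ran mid-workload; the sketch simply groups the related steps together.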
00:08:46.423 Received shutdown signal, test time was about 10.000000 seconds 00:08:46.423 00:08:46.423 Latency(us) 00:08:46.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.423 =================================================================================================================== 00:08:46.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:46.423 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65905 00:08:46.682 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.941 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:47.200 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:47.200 22:35:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:47.459 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:47.459 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:47.459 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.719 [2024-07-15 22:35:05.486496] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:47.719 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:47.978 request: 00:08:47.978 { 00:08:47.978 "uuid": "7023f140-c662-4ced-bb46-a1f54226fbea", 00:08:47.978 "method": "bdev_lvol_get_lvstores", 00:08:47.978 "req_id": 1 00:08:47.978 } 00:08:47.978 Got JSON-RPC error response 00:08:47.978 response: 00:08:47.978 { 00:08:47.978 "code": -19, 00:08:47.978 "message": "No such device" 00:08:47.978 } 00:08:47.978 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:47.978 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:47.978 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:47.978 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:47.978 22:35:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.237 aio_bdev 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4377fe35-63ab-475b-9286-bdfa977f4003 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=4377fe35-63ab-475b-9286-bdfa977f4003 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:48.237 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.496 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4377fe35-63ab-475b-9286-bdfa977f4003 -t 2000 00:08:48.755 [ 00:08:48.755 { 00:08:48.755 "name": "4377fe35-63ab-475b-9286-bdfa977f4003", 00:08:48.755 "aliases": [ 00:08:48.755 "lvs/lvol" 00:08:48.755 ], 00:08:48.755 "product_name": "Logical Volume", 00:08:48.755 "block_size": 4096, 00:08:48.755 "num_blocks": 38912, 00:08:48.755 "uuid": "4377fe35-63ab-475b-9286-bdfa977f4003", 00:08:48.755 "assigned_rate_limits": { 00:08:48.755 "rw_ios_per_sec": 0, 00:08:48.755 "rw_mbytes_per_sec": 0, 00:08:48.755 "r_mbytes_per_sec": 0, 00:08:48.755 "w_mbytes_per_sec": 0 00:08:48.755 }, 00:08:48.755 "claimed": false, 00:08:48.755 "zoned": false, 00:08:48.755 "supported_io_types": { 00:08:48.755 "read": true, 00:08:48.755 "write": true, 00:08:48.755 "unmap": true, 00:08:48.755 "flush": false, 00:08:48.755 "reset": true, 00:08:48.755 "nvme_admin": false, 00:08:48.755 "nvme_io": false, 00:08:48.755 "nvme_io_md": false, 00:08:48.755 "write_zeroes": true, 00:08:48.755 "zcopy": false, 00:08:48.755 "get_zone_info": false, 00:08:48.755 "zone_management": false, 00:08:48.755 "zone_append": false, 00:08:48.755 "compare": false, 00:08:48.755 "compare_and_write": false, 00:08:48.755 "abort": false, 00:08:48.755 "seek_hole": true, 00:08:48.755 "seek_data": true, 00:08:48.755 "copy": false, 00:08:48.755 "nvme_iov_md": false 00:08:48.755 }, 00:08:48.755 "driver_specific": { 00:08:48.755 "lvol": { 
00:08:48.755 "lvol_store_uuid": "7023f140-c662-4ced-bb46-a1f54226fbea", 00:08:48.755 "base_bdev": "aio_bdev", 00:08:48.755 "thin_provision": false, 00:08:48.755 "num_allocated_clusters": 38, 00:08:48.755 "snapshot": false, 00:08:48.755 "clone": false, 00:08:48.755 "esnap_clone": false 00:08:48.755 } 00:08:48.755 } 00:08:48.755 } 00:08:48.755 ] 00:08:48.755 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:48.755 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:48.755 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:49.322 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:49.322 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.322 22:35:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:49.580 22:35:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.580 22:35:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4377fe35-63ab-475b-9286-bdfa977f4003 00:08:49.839 22:35:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7023f140-c662-4ced-bb46-a1f54226fbea 00:08:50.097 22:35:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.356 22:35:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.616 ************************************ 00:08:50.616 END TEST lvs_grow_clean 00:08:50.616 ************************************ 00:08:50.616 00:08:50.616 real 0m18.494s 00:08:50.616 user 0m17.162s 00:08:50.616 sys 0m2.778s 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.616 ************************************ 00:08:50.616 START TEST lvs_grow_dirty 00:08:50.616 ************************************ 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:50.616 22:35:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.616 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.874 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:50.874 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.442 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ac7d265-180e-404e-ad55-1c855f1b5982 00:08:51.442 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:08:51.442 22:35:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.442 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.442 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.442 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2ac7d265-180e-404e-ad55-1c855f1b5982 lvol 150 00:08:51.701 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=14d01407-1ebd-4558-895f-1e882e481404 00:08:51.701 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.701 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:51.960 [2024-07-15 22:35:09.711741] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:51.960 [2024-07-15 22:35:09.711845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:51.960 true 00:08:51.960 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:08:51.960 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.218 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:52.218 22:35:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.477 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14d01407-1ebd-4558-895f-1e882e481404 00:08:52.736 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:52.994 [2024-07-15 22:35:10.644349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.994 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66177 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66177 /var/tmp/bdevperf.sock 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66177 ']' 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.253 22:35:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.253 [2024-07-15 22:35:10.980278] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
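For anyone skimming the xtrace above: stripped of the tracing prefixes, the lvs_grow dirty setup that just ran is roughly the sequence below. It is a condensed sketch assembled from the commands in this log; the $rpc, $aio, $lvs and $lvol shell variables are shorthand added here, everything else (paths, sizes, NQN, address) is taken from the run itself.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    rm -f "$aio" && truncate -s 200M "$aio"                 # 200 MiB backing file
    $rpc bdev_aio_create "$aio" aio_bdev 4096               # expose it as an AIO bdev
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 4 MiB clusters, 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB lvol (38 clusters allocated)

    truncate -s 400M "$aio"                                 # grow the file underneath
    $rpc bdev_aio_rescan aio_bdev                           # bdev grows 51200 -> 102400 blocks; lvstore still 49 clusters

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf (pid 66177) then attaches to that subsystem over TCP as Nvme0 and drives the 10-second randwrite workload whose per-second results appear in the table further down.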
00:08:53.253 [2024-07-15 22:35:10.981077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66177 ] 00:08:53.512 [2024-07-15 22:35:11.118243] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.512 [2024-07-15 22:35:11.232285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.512 [2024-07-15 22:35:11.289508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.448 22:35:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.448 22:35:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:54.448 22:35:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.448 Nvme0n1 00:08:54.448 22:35:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.708 [ 00:08:54.708 { 00:08:54.708 "name": "Nvme0n1", 00:08:54.708 "aliases": [ 00:08:54.708 "14d01407-1ebd-4558-895f-1e882e481404" 00:08:54.708 ], 00:08:54.708 "product_name": "NVMe disk", 00:08:54.708 "block_size": 4096, 00:08:54.708 "num_blocks": 38912, 00:08:54.708 "uuid": "14d01407-1ebd-4558-895f-1e882e481404", 00:08:54.708 "assigned_rate_limits": { 00:08:54.708 "rw_ios_per_sec": 0, 00:08:54.708 "rw_mbytes_per_sec": 0, 00:08:54.708 "r_mbytes_per_sec": 0, 00:08:54.708 "w_mbytes_per_sec": 0 00:08:54.708 }, 00:08:54.708 "claimed": false, 00:08:54.708 "zoned": false, 00:08:54.708 "supported_io_types": { 00:08:54.708 "read": true, 00:08:54.708 "write": true, 00:08:54.708 "unmap": true, 00:08:54.708 "flush": true, 00:08:54.708 "reset": true, 00:08:54.708 "nvme_admin": true, 00:08:54.708 "nvme_io": true, 00:08:54.708 "nvme_io_md": false, 00:08:54.708 "write_zeroes": true, 00:08:54.708 "zcopy": false, 00:08:54.708 "get_zone_info": false, 00:08:54.708 "zone_management": false, 00:08:54.708 "zone_append": false, 00:08:54.708 "compare": true, 00:08:54.708 "compare_and_write": true, 00:08:54.708 "abort": true, 00:08:54.708 "seek_hole": false, 00:08:54.708 "seek_data": false, 00:08:54.708 "copy": true, 00:08:54.708 "nvme_iov_md": false 00:08:54.708 }, 00:08:54.708 "memory_domains": [ 00:08:54.708 { 00:08:54.708 "dma_device_id": "system", 00:08:54.708 "dma_device_type": 1 00:08:54.708 } 00:08:54.708 ], 00:08:54.708 "driver_specific": { 00:08:54.708 "nvme": [ 00:08:54.708 { 00:08:54.708 "trid": { 00:08:54.708 "trtype": "TCP", 00:08:54.708 "adrfam": "IPv4", 00:08:54.708 "traddr": "10.0.0.2", 00:08:54.708 "trsvcid": "4420", 00:08:54.708 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:54.708 }, 00:08:54.708 "ctrlr_data": { 00:08:54.708 "cntlid": 1, 00:08:54.708 "vendor_id": "0x8086", 00:08:54.708 "model_number": "SPDK bdev Controller", 00:08:54.708 "serial_number": "SPDK0", 00:08:54.708 "firmware_revision": "24.09", 00:08:54.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.708 "oacs": { 00:08:54.708 "security": 0, 00:08:54.708 "format": 0, 00:08:54.708 "firmware": 0, 00:08:54.708 "ns_manage": 0 00:08:54.708 }, 00:08:54.708 "multi_ctrlr": true, 00:08:54.708 
"ana_reporting": false 00:08:54.708 }, 00:08:54.708 "vs": { 00:08:54.708 "nvme_version": "1.3" 00:08:54.708 }, 00:08:54.708 "ns_data": { 00:08:54.708 "id": 1, 00:08:54.708 "can_share": true 00:08:54.708 } 00:08:54.708 } 00:08:54.708 ], 00:08:54.708 "mp_policy": "active_passive" 00:08:54.708 } 00:08:54.708 } 00:08:54.708 ] 00:08:54.708 22:35:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66201 00:08:54.708 22:35:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.708 22:35:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.967 Running I/O for 10 seconds... 00:08:55.901 Latency(us) 00:08:55.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.901 Nvme0n1 : 1.00 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:08:55.901 =================================================================================================================== 00:08:55.901 Total : 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:08:55.901 00:08:56.867 22:35:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:08:56.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.867 Nvme0n1 : 2.00 8001.00 31.25 0.00 0.00 0.00 0.00 0.00 00:08:56.867 =================================================================================================================== 00:08:56.867 Total : 8001.00 31.25 0.00 0.00 0.00 0.00 0.00 00:08:56.867 00:08:57.128 true 00:08:57.128 22:35:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:08:57.128 22:35:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:57.385 22:35:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.385 22:35:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.385 22:35:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66201 00:08:57.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.951 Nvme0n1 : 3.00 7916.33 30.92 0.00 0.00 0.00 0.00 0.00 00:08:57.951 =================================================================================================================== 00:08:57.951 Total : 7916.33 30.92 0.00 0.00 0.00 0.00 0.00 00:08:57.951 00:08:58.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.887 Nvme0n1 : 4.00 7842.25 30.63 0.00 0.00 0.00 0.00 0.00 00:08:58.887 =================================================================================================================== 00:08:58.887 Total : 7842.25 30.63 0.00 0.00 0.00 0.00 0.00 00:08:58.887 00:08:59.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.822 Nvme0n1 : 5.00 7772.40 30.36 0.00 0.00 0.00 0.00 0.00 00:08:59.822 =================================================================================================================== 00:08:59.822 Total : 7772.40 30.36 0.00 0.00 0.00 
0.00 0.00 00:08:59.822 00:09:01.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.195 Nvme0n1 : 6.00 7725.83 30.18 0.00 0.00 0.00 0.00 0.00 00:09:01.195 =================================================================================================================== 00:09:01.195 Total : 7725.83 30.18 0.00 0.00 0.00 0.00 0.00 00:09:01.195 00:09:02.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.137 Nvme0n1 : 7.00 7697.14 30.07 0.00 0.00 0.00 0.00 0.00 00:09:02.137 =================================================================================================================== 00:09:02.137 Total : 7697.14 30.07 0.00 0.00 0.00 0.00 0.00 00:09:02.137 00:09:03.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.090 Nvme0n1 : 8.00 7504.75 29.32 0.00 0.00 0.00 0.00 0.00 00:09:03.090 =================================================================================================================== 00:09:03.090 Total : 7504.75 29.32 0.00 0.00 0.00 0.00 0.00 00:09:03.090 00:09:04.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.023 Nvme0n1 : 9.00 7461.11 29.14 0.00 0.00 0.00 0.00 0.00 00:09:04.023 =================================================================================================================== 00:09:04.023 Total : 7461.11 29.14 0.00 0.00 0.00 0.00 0.00 00:09:04.023 00:09:04.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.956 Nvme0n1 : 10.00 7400.80 28.91 0.00 0.00 0.00 0.00 0.00 00:09:04.956 =================================================================================================================== 00:09:04.956 Total : 7400.80 28.91 0.00 0.00 0.00 0.00 0.00 00:09:04.956 00:09:04.956 00:09:04.956 Latency(us) 00:09:04.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.956 Nvme0n1 : 10.00 7411.40 28.95 0.00 0.00 17265.29 7626.01 162052.65 00:09:04.956 =================================================================================================================== 00:09:04.956 Total : 7411.40 28.95 0.00 0.00 17265.29 7626.01 162052.65 00:09:04.956 0 00:09:04.956 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66177 00:09:04.956 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66177 ']' 00:09:04.956 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66177 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66177 00:09:04.957 killing process with pid 66177 00:09:04.957 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.957 00:09:04.957 Latency(us) 00:09:04.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.957 =================================================================================================================== 00:09:04.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66177' 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66177 00:09:04.957 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66177 00:09:05.215 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.474 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.755 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:05.755 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65830 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65830 00:09:06.015 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65830 Killed "${NVMF_APP[@]}" "$@" 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66339 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66339 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66339 ']' 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
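This is the step that makes the test "dirty": the lvstore was grown with bdev_lvol_grow_lvstore while bdevperf was still writing, its free_clusters were verified, and then the nvmf target was killed with SIGKILL so the lvstore never gets a clean unload. A rough sketch of what the harness did, reusing the shorthand from the earlier sketch and the pids from this run:

    $rpc bdev_lvol_grow_lvstore -u "$lvs"      # grow to the new 400 MiB AIO size, under I/O
    kill -9 65830                              # SIGKILL the running nvmf_tgt: metadata left dirty
    # start a fresh target (becomes pid 66339) and hand it the same backing file
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    $rpc bdev_aio_create "$aio" aio_bdev 4096  # re-attaching the bdev triggers blobstore recovery

The "Performing recovery on blobstore" notices just below are the expected consequence of that SIGKILL.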
00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.015 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.015 [2024-07-15 22:35:23.729475] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:06.015 [2024-07-15 22:35:23.729578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.274 [2024-07-15 22:35:23.866237] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.274 [2024-07-15 22:35:23.970284] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.274 [2024-07-15 22:35:23.970332] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.274 [2024-07-15 22:35:23.970342] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.274 [2024-07-15 22:35:23.970350] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.274 [2024-07-15 22:35:23.970359] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.274 [2024-07-15 22:35:23.970383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.274 [2024-07-15 22:35:24.023981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.843 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.843 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:06.843 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.843 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.843 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.101 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.101 22:35:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.360 [2024-07-15 22:35:24.973979] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:07.360 [2024-07-15 22:35:24.974376] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:07.360 [2024-07-15 22:35:24.974573] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 14d01407-1ebd-4558-895f-1e882e481404 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=14d01407-1ebd-4558-895f-1e882e481404 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
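Once the recovered lvstore is loaded again, the checks that follow reduce to two cluster counts read back over JSON-RPC (sketch; the expected values are the ones asserted in this log):

    free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free  == 61 ))   # 99 total clusters minus the 38 held by the 150 MiB lvol
    (( total == 99 ))   # the grow to 400 MiB survived the SIGKILL and the recovery

The same pair of assertions is repeated after aio_bdev is deleted and re-created a second time, which shows the grown geometry is persistent on disk rather than an artifact of the in-memory recovery; only then are the lvol, the lvstore and the AIO bdev torn down.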
00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:07.360 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.619 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14d01407-1ebd-4558-895f-1e882e481404 -t 2000 00:09:07.879 [ 00:09:07.879 { 00:09:07.879 "name": "14d01407-1ebd-4558-895f-1e882e481404", 00:09:07.879 "aliases": [ 00:09:07.879 "lvs/lvol" 00:09:07.879 ], 00:09:07.879 "product_name": "Logical Volume", 00:09:07.879 "block_size": 4096, 00:09:07.879 "num_blocks": 38912, 00:09:07.879 "uuid": "14d01407-1ebd-4558-895f-1e882e481404", 00:09:07.879 "assigned_rate_limits": { 00:09:07.879 "rw_ios_per_sec": 0, 00:09:07.879 "rw_mbytes_per_sec": 0, 00:09:07.879 "r_mbytes_per_sec": 0, 00:09:07.879 "w_mbytes_per_sec": 0 00:09:07.879 }, 00:09:07.879 "claimed": false, 00:09:07.879 "zoned": false, 00:09:07.879 "supported_io_types": { 00:09:07.879 "read": true, 00:09:07.879 "write": true, 00:09:07.879 "unmap": true, 00:09:07.879 "flush": false, 00:09:07.879 "reset": true, 00:09:07.879 "nvme_admin": false, 00:09:07.879 "nvme_io": false, 00:09:07.879 "nvme_io_md": false, 00:09:07.879 "write_zeroes": true, 00:09:07.879 "zcopy": false, 00:09:07.879 "get_zone_info": false, 00:09:07.879 "zone_management": false, 00:09:07.879 "zone_append": false, 00:09:07.879 "compare": false, 00:09:07.879 "compare_and_write": false, 00:09:07.879 "abort": false, 00:09:07.879 "seek_hole": true, 00:09:07.879 "seek_data": true, 00:09:07.879 "copy": false, 00:09:07.879 "nvme_iov_md": false 00:09:07.879 }, 00:09:07.879 "driver_specific": { 00:09:07.879 "lvol": { 00:09:07.879 "lvol_store_uuid": "2ac7d265-180e-404e-ad55-1c855f1b5982", 00:09:07.879 "base_bdev": "aio_bdev", 00:09:07.879 "thin_provision": false, 00:09:07.879 "num_allocated_clusters": 38, 00:09:07.879 "snapshot": false, 00:09:07.879 "clone": false, 00:09:07.879 "esnap_clone": false 00:09:07.879 } 00:09:07.879 } 00:09:07.879 } 00:09:07.879 ] 00:09:07.879 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:07.879 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:07.879 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:08.138 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:08.138 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:08.138 22:35:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:08.397 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:08.397 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.655 [2024-07-15 22:35:26.231720] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:08.655 request: 00:09:08.655 { 00:09:08.655 "uuid": "2ac7d265-180e-404e-ad55-1c855f1b5982", 00:09:08.655 "method": "bdev_lvol_get_lvstores", 00:09:08.655 "req_id": 1 00:09:08.655 } 00:09:08.655 Got JSON-RPC error response 00:09:08.655 response: 00:09:08.655 { 00:09:08.655 "code": -19, 00:09:08.655 "message": "No such device" 00:09:08.655 } 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.655 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.913 aio_bdev 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 14d01407-1ebd-4558-895f-1e882e481404 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=14d01407-1ebd-4558-895f-1e882e481404 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:08.913 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:09.197 22:35:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14d01407-1ebd-4558-895f-1e882e481404 -t 2000 00:09:09.508 [ 00:09:09.508 { 00:09:09.508 "name": "14d01407-1ebd-4558-895f-1e882e481404", 00:09:09.508 "aliases": [ 00:09:09.508 "lvs/lvol" 00:09:09.508 ], 00:09:09.508 "product_name": "Logical Volume", 00:09:09.508 "block_size": 4096, 00:09:09.508 "num_blocks": 38912, 00:09:09.508 "uuid": "14d01407-1ebd-4558-895f-1e882e481404", 00:09:09.508 "assigned_rate_limits": { 00:09:09.508 "rw_ios_per_sec": 0, 00:09:09.508 "rw_mbytes_per_sec": 0, 00:09:09.508 "r_mbytes_per_sec": 0, 00:09:09.508 "w_mbytes_per_sec": 0 00:09:09.508 }, 00:09:09.508 "claimed": false, 00:09:09.508 "zoned": false, 00:09:09.508 "supported_io_types": { 00:09:09.508 "read": true, 00:09:09.508 "write": true, 00:09:09.508 "unmap": true, 00:09:09.508 "flush": false, 00:09:09.508 "reset": true, 00:09:09.508 "nvme_admin": false, 00:09:09.508 "nvme_io": false, 00:09:09.508 "nvme_io_md": false, 00:09:09.508 "write_zeroes": true, 00:09:09.508 "zcopy": false, 00:09:09.508 "get_zone_info": false, 00:09:09.508 "zone_management": false, 00:09:09.508 "zone_append": false, 00:09:09.508 "compare": false, 00:09:09.508 "compare_and_write": false, 00:09:09.508 "abort": false, 00:09:09.508 "seek_hole": true, 00:09:09.508 "seek_data": true, 00:09:09.508 "copy": false, 00:09:09.508 "nvme_iov_md": false 00:09:09.508 }, 00:09:09.508 "driver_specific": { 00:09:09.508 "lvol": { 00:09:09.508 "lvol_store_uuid": "2ac7d265-180e-404e-ad55-1c855f1b5982", 00:09:09.508 "base_bdev": "aio_bdev", 00:09:09.508 "thin_provision": false, 00:09:09.508 "num_allocated_clusters": 38, 00:09:09.508 "snapshot": false, 00:09:09.508 "clone": false, 00:09:09.508 "esnap_clone": false 00:09:09.508 } 00:09:09.508 } 00:09:09.508 } 00:09:09.508 ] 00:09:09.508 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:09.508 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:09.508 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:09.766 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:09.766 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:09.766 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:10.024 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:10.024 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 14d01407-1ebd-4558-895f-1e882e481404 00:09:10.282 22:35:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2ac7d265-180e-404e-ad55-1c855f1b5982 00:09:10.541 22:35:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.800 22:35:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.058 00:09:11.058 real 0m20.407s 00:09:11.058 user 0m42.113s 00:09:11.058 sys 0m8.731s 00:09:11.058 22:35:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.058 ************************************ 00:09:11.059 END TEST lvs_grow_dirty 00:09:11.059 ************************************ 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:11.059 nvmf_trace.0 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.059 22:35:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:11.317 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.317 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:11.317 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.317 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.317 rmmod nvme_tcp 00:09:11.317 rmmod nvme_fabrics 00:09:11.576 rmmod nvme_keyring 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66339 ']' 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66339 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66339 ']' 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66339 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66339 00:09:11.576 killing process with pid 66339 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66339' 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66339 00:09:11.576 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66339 00:09:11.836 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.836 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.836 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.836 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.836 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.837 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.837 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.837 22:35:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:11.837 ************************************ 00:09:11.837 END TEST nvmf_lvs_grow 00:09:11.837 ************************************ 00:09:11.837 00:09:11.837 real 0m40.827s 00:09:11.837 user 1m5.429s 00:09:11.837 sys 0m12.249s 00:09:11.837 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.837 22:35:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.837 22:35:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:11.837 22:35:29 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:11.837 22:35:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:11.837 22:35:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.837 22:35:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.837 ************************************ 00:09:11.837 START TEST nvmf_bdev_io_wait 00:09:11.837 ************************************ 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:11.837 * Looking for test storage... 
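Before nvmf_bdev_io_wait gets going, note that the lvs_grow suite above exited through the shared nvmftestfini/process_shm trap. Stripped of the xtrace prefixes, that teardown amounts to roughly the following (pid and file names from this run; $output stands for the job's output directory):

    tar -C /dev/shm/ -cvzf "$output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # archive the SPDK trace for offline analysis
    modprobe -v -r nvme-tcp        # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 66339                     # stop the nvmf_tgt started for the suite
    ip -4 addr flush nvmf_init_if  # flushed once the target network namespace is removed

Each target-mode suite in this job follows the same pattern: nvmftestinit builds the virtual network and starts a target, the test script drives it over JSON-RPC, and nvmftestfini captures the trace and tears everything back down.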
00:09:11.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:11.837 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.838 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:12.097 Cannot find device "nvmf_tgt_br" 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.097 Cannot find device "nvmf_tgt_br2" 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:12.097 Cannot find device "nvmf_tgt_br" 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:12.097 Cannot find device "nvmf_tgt_br2" 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
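The "Cannot find device" and "Cannot open network namespace" lines here are expected: nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and on a freshly cleaned node every one of those deletes simply finds nothing to remove. Judging by the xtrace (each failed command is immediately followed by a true at the same source line), the cleanup uses the usual tolerant pattern, roughly:

    ip link set nvmf_tgt_br nomaster || true        # ignore "Cannot find device"
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true    # ignore a missing namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

Only after this cleanup does the harness create the namespace and veth pairs from scratch.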
00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:12.097 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.357 22:35:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:12.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:12.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:12.357 00:09:12.357 --- 10.0.0.2 ping statistics --- 00:09:12.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.357 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:12.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:09:12.357 00:09:12.357 --- 10.0.0.3 ping statistics --- 00:09:12.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.357 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:09:12.357 00:09:12.357 --- 10.0.0.1 ping statistics --- 00:09:12.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.357 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66652 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66652 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66652 ']' 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
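In short, the NET_TYPE=virt wiring that nvmf_veth_init just built, and that the pings above confirm, looks like this (condensed from the commands in this log; the per-interface "up" steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br      # the bridge joins the three host-side peer ends
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # host reaches the target namespace through the bridge

The nvmf_tgt for bdev_io_wait is then started inside nvmf_tgt_ns_spdk with --wait-for-rpc, so the script can apply bdev_set_options -p 5 -c 1 before framework_start_init, as seen a little further on.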
00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.357 22:35:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.357 [2024-07-15 22:35:30.101666] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:12.357 [2024-07-15 22:35:30.101745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.616 [2024-07-15 22:35:30.236171] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.616 [2024-07-15 22:35:30.347190] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.616 [2024-07-15 22:35:30.347512] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.616 [2024-07-15 22:35:30.347656] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.616 [2024-07-15 22:35:30.347713] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.616 [2024-07-15 22:35:30.347744] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.616 [2024-07-15 22:35:30.348031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.616 [2024-07-15 22:35:30.348276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.616 [2024-07-15 22:35:30.348183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.616 [2024-07-15 22:35:30.349080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 [2024-07-15 22:35:31.224910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.552 
22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 [2024-07-15 22:35:31.241565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 Malloc0 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.552 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.553 [2024-07-15 22:35:31.304911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66687 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66688 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66691 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.553 { 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme$subsystem", 00:09:13.553 "trtype": "$TEST_TRANSPORT", 00:09:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": 
"$NVMF_PORT", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.553 "hdgst": ${hdgst:-false}, 00:09:13.553 "ddgst": ${ddgst:-false} 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 } 00:09:13.553 EOF 00:09:13.553 )") 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66694 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.553 { 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme$subsystem", 00:09:13.553 "trtype": "$TEST_TRANSPORT", 00:09:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "$NVMF_PORT", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.553 "hdgst": ${hdgst:-false}, 00:09:13.553 "ddgst": ${ddgst:-false} 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 } 00:09:13.553 EOF 00:09:13.553 )") 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.553 { 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme$subsystem", 00:09:13.553 "trtype": "$TEST_TRANSPORT", 00:09:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "$NVMF_PORT", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.553 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:13.553 "hdgst": ${hdgst:-false}, 00:09:13.553 "ddgst": ${ddgst:-false} 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 } 00:09:13.553 EOF 00:09:13.553 )") 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.553 { 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme$subsystem", 00:09:13.553 "trtype": "$TEST_TRANSPORT", 00:09:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "$NVMF_PORT", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.553 "hdgst": ${hdgst:-false}, 00:09:13.553 "ddgst": ${ddgst:-false} 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 } 00:09:13.553 EOF 00:09:13.553 )") 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme1", 00:09:13.553 "trtype": "tcp", 00:09:13.553 "traddr": "10.0.0.2", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "4420", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.553 "hdgst": false, 00:09:13.553 "ddgst": false 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 }' 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme1", 00:09:13.553 "trtype": "tcp", 00:09:13.553 "traddr": "10.0.0.2", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "4420", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.553 "hdgst": false, 00:09:13.553 "ddgst": false 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 }' 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme1", 00:09:13.553 "trtype": "tcp", 00:09:13.553 "traddr": "10.0.0.2", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "4420", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.553 "hdgst": false, 00:09:13.553 "ddgst": false 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 }' 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.553 "params": { 00:09:13.553 "name": "Nvme1", 00:09:13.553 "trtype": "tcp", 00:09:13.553 "traddr": "10.0.0.2", 00:09:13.553 "adrfam": "ipv4", 00:09:13.553 "trsvcid": "4420", 00:09:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.553 "hdgst": false, 00:09:13.553 "ddgst": false 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 }' 00:09:13.553 22:35:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66687 00:09:13.553 [2024-07-15 22:35:31.362248] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:13.553 [2024-07-15 22:35:31.362437] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:13.553 [2024-07-15 22:35:31.369128] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:13.553 [2024-07-15 22:35:31.369361] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:13.553 [2024-07-15 22:35:31.384743] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:13.553 [2024-07-15 22:35:31.385247] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:13.812 [2024-07-15 22:35:31.406841] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:13.812 [2024-07-15 22:35:31.407229] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:13.812 [2024-07-15 22:35:31.597681] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.071 [2024-07-15 22:35:31.706099] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.071 [2024-07-15 22:35:31.718039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:14.071 [2024-07-15 22:35:31.809180] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.071 [2024-07-15 22:35:31.821573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.071 [2024-07-15 22:35:31.841753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:14.071 [2024-07-15 22:35:31.888837] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.330 [2024-07-15 22:35:31.906958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.330 Running I/O for 1 seconds... 
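
The blocks of JSON printed just above come from gen_nvmf_target_json: a per-subsystem fragment is filled in from a heredoc template, wrapped into a bdev-subsystem configuration (the shape bdevperf's --json option expects), and validated with jq before being handed to each bdevperf over /dev/fd/63 via process substitution. Below is a reduced, single-subsystem sketch of that generation using the values from this run; the real helper in nvmf/common.sh loops over a list of subsystem numbers and joins the fragments, so treat this as an illustration rather than the exact function.

#!/usr/bin/env bash
# Reduced sketch of the gen_nvmf_target_json pattern (single subsystem).
# Values mirror this run; the real helper iterates over "$@" and joins fragments.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# jq both validates and pretty-prints the generated document.
jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
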
00:09:14.330 [2024-07-15 22:35:31.944145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:14.330 [2024-07-15 22:35:31.995928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:14.330 Running I/O for 1 seconds... 00:09:14.330 [2024-07-15 22:35:32.006567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.330 [2024-07-15 22:35:32.044805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.330 Running I/O for 1 seconds... 00:09:14.588 Running I/O for 1 seconds... 00:09:15.156 00:09:15.156 Latency(us) 00:09:15.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.156 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:15.156 Nvme1n1 : 1.01 7221.23 28.21 0.00 0.00 17624.50 6672.76 23712.12 00:09:15.156 =================================================================================================================== 00:09:15.156 Total : 7221.23 28.21 0.00 0.00 17624.50 6672.76 23712.12 00:09:15.416 00:09:15.416 Latency(us) 00:09:15.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.416 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:15.416 Nvme1n1 : 1.04 3601.18 14.07 0.00 0.00 34905.19 12868.89 62437.93 00:09:15.416 =================================================================================================================== 00:09:15.416 Total : 3601.18 14.07 0.00 0.00 34905.19 12868.89 62437.93 00:09:15.416 00:09:15.416 Latency(us) 00:09:15.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.416 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:15.416 Nvme1n1 : 1.01 4055.57 15.84 0.00 0.00 31435.25 7238.75 79596.45 00:09:15.416 =================================================================================================================== 00:09:15.416 Total : 4055.57 15.84 0.00 0.00 31435.25 7238.75 79596.45 00:09:15.416 00:09:15.416 Latency(us) 00:09:15.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.416 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:15.416 Nvme1n1 : 1.00 181348.54 708.39 0.00 0.00 703.31 318.37 1124.54 00:09:15.416 =================================================================================================================== 00:09:15.416 Total : 181348.54 708.39 0.00 0.00 703.31 318.37 1124.54 00:09:15.673 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66688 00:09:15.673 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66691 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66694 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 
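
The four result tables above come from four bdevperf processes launched concurrently a moment earlier, one per workload (write, read, flush, unmap), each pinned to its own core mask and shared-memory id and fed the generated JSON through process substitution; the harness records the PIDs (66687, 66688, 66691, 66694) and waits on each. A condensed sketch of that launch-and-wait pattern follows; it assumes gen_nvmf_target_json is available, for example by sourcing /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh as the scripts in this log do.

#!/usr/bin/env bash
# Condensed sketch of the concurrent bdevperf jobs from bdev_io_wait.sh.
# Assumes gen_nvmf_target_json is available (it lives in test/nvmf/common.sh).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

# Each instance runs its workload for ~1 second against Nvme1n1, then exits.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
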
00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.931 rmmod nvme_tcp 00:09:15.931 rmmod nvme_fabrics 00:09:15.931 rmmod nvme_keyring 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66652 ']' 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66652 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66652 ']' 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66652 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66652 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:15.931 killing process with pid 66652 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66652' 00:09:15.931 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66652 00:09:15.932 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66652 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:16.190 00:09:16.190 real 0m4.390s 00:09:16.190 user 0m19.555s 00:09:16.190 sys 0m2.244s 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.190 22:35:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.190 ************************************ 00:09:16.190 END TEST nvmf_bdev_io_wait 00:09:16.190 ************************************ 00:09:16.190 22:35:33 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 00:09:16.190 22:35:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.190 22:35:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:16.190 22:35:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.190 22:35:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:16.190 ************************************ 00:09:16.190 START TEST nvmf_queue_depth 00:09:16.190 ************************************ 00:09:16.190 22:35:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.449 * Looking for test storage... 00:09:16.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.449 22:35:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:16.450 Cannot find device "nvmf_tgt_br" 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.450 Cannot find device "nvmf_tgt_br2" 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:16.450 22:35:34 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:16.450 Cannot find device "nvmf_tgt_br" 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:16.450 Cannot find device "nvmf_tgt_br2" 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.450 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.709 
22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:16.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:16.709 00:09:16.709 --- 10.0.0.2 ping statistics --- 00:09:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.709 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:16.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:16.709 00:09:16.709 --- 10.0.0.3 ping statistics --- 00:09:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.709 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:16.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:16.709 00:09:16.709 --- 10.0.0.1 ping statistics --- 00:09:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.709 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66933 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66933 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66933 ']' 00:09:16.709 22:35:34 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.709 22:35:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.709 [2024-07-15 22:35:34.489101] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:16.709 [2024-07-15 22:35:34.489169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.968 [2024-07-15 22:35:34.626954] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.968 [2024-07-15 22:35:34.742783] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.968 [2024-07-15 22:35:34.742891] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.968 [2024-07-15 22:35:34.742918] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.968 [2024-07-15 22:35:34.742941] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.968 [2024-07-15 22:35:34.742962] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
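
For the queue_depth test, nvmfappstart launches the target inside the namespace on a single core (-m 0x2) and blocks until the JSON-RPC socket answers, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above. A reduced sketch of that start-up follows; the polling loop is a simplified stand-in for the waitforlisten helper from autotest_common.sh, which actually probes the socket with rpc.py.

#!/usr/bin/env bash
# Simplified stand-in for nvmfappstart -m 0x2: start nvmf_tgt in the test
# namespace and poll for its RPC socket instead of calling waitforlisten().
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Give the target up to ~10s to create its UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    [[ -S $RPC_SOCK ]] && break
    sleep 0.1
done
echo "nvmf_tgt running as pid $nvmfpid"
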
00:09:16.968 [2024-07-15 22:35:34.743007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.968 [2024-07-15 22:35:34.801935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 [2024-07-15 22:35:35.478381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 Malloc0 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 [2024-07-15 22:35:35.546354] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66971 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66971 /var/tmp/bdevperf.sock 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66971 ']' 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.905 22:35:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 [2024-07-15 22:35:35.609818] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:17.905 [2024-07-15 22:35:35.609984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66971 ] 00:09:18.164 [2024-07-15 22:35:35.752017] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.164 [2024-07-15 22:35:35.858200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.164 [2024-07-15 22:35:35.915457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.102 NVMe0n1 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.102 22:35:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.102 Running I/O for 10 seconds... 
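
With the target up, queue_depth.sh provisions it over RPC (TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem, its namespace, and a listener on 10.0.0.2:4420), then drives it from a bdevperf started idle (-z) on its own RPC socket with a queue depth of 1024 and a 10-second verify workload. The following is a condensed sketch of that sequence; rpc.py stands in for the harness's rpc_cmd wrapper, and the sleep is a simplified substitute for waiting on the bdevperf socket.

#!/usr/bin/env bash
# Condensed sketch of the queue_depth.sh provisioning + benchmark flow.
# Paths are the ones printed elsewhere in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bdevperf.sock

# Provision the running nvmf_tgt (listening on the default /var/tmp/spdk.sock).
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start bdevperf idle (-z), attach the remote namespace over bdevperf's own
# RPC socket, run the 10s / queue-depth-1024 verify job, then tear it down.
"$BDEVPERF" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
sleep 1   # simplified stand-in for waitforlisten "$bdevperf_pid" "$SOCK"
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$BDEVPERF_PY" -s "$SOCK" perform_tests
kill "$bdevperf_pid"
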
00:09:29.135 00:09:29.135 Latency(us) 00:09:29.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.135 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:29.135 Verification LBA range: start 0x0 length 0x4000 00:09:29.135 NVMe0n1 : 10.07 8789.76 34.34 0.00 0.00 115927.78 15966.95 85792.58 00:09:29.135 =================================================================================================================== 00:09:29.135 Total : 8789.76 34.34 0.00 0.00 115927.78 15966.95 85792.58 00:09:29.135 0 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66971 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66971 ']' 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66971 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66971 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:29.135 killing process with pid 66971 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66971' 00:09:29.135 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.135 00:09:29.135 Latency(us) 00:09:29.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.135 =================================================================================================================== 00:09:29.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66971 00:09:29.135 22:35:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66971 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:29.717 rmmod nvme_tcp 00:09:29.717 rmmod nvme_fabrics 00:09:29.717 rmmod nvme_keyring 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66933 ']' 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66933 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66933 ']' 00:09:29.717 
22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66933 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66933 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:29.717 killing process with pid 66933 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66933' 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66933 00:09:29.717 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66933 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:29.976 00:09:29.976 real 0m13.698s 00:09:29.976 user 0m23.747s 00:09:29.976 sys 0m2.235s 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.976 ************************************ 00:09:29.976 END TEST nvmf_queue_depth 00:09:29.976 22:35:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 ************************************ 00:09:29.976 22:35:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:29.976 22:35:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.976 22:35:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.976 22:35:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.976 22:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 ************************************ 00:09:29.976 START TEST nvmf_target_multipath 00:09:29.976 ************************************ 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.976 * Looking for test storage... 
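
nvmftestfini then undoes everything: the kernel NVMe initiator modules are unloaded (the rmmod lines above), the target process is killed after a sanity check on its name (reactor_1 here, never sudo), and the namespace and initiator-side address are removed so the next test (nvmf_target_multipath below) can rebuild the network from scratch. A minimal sketch of that teardown follows; NVMFPID stands for the target pid (66933 in this run), and the single ip netns delete is a simplified stand-in for the remove_spdk_ns helper.

#!/usr/bin/env bash
# Minimal sketch of the nvmftestfini teardown visible above.
NVMFPID=66933   # the nvmf_tgt started for this test; substitute your own

# Unload the kernel initiator modules (this is what emits the rmmod lines).
sync
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# Kill the target, refusing to touch anything that isn't ours (e.g. sudo).
if kill -0 "$NVMFPID" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= "$NVMFPID")
    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $NVMFPID"
        kill "$NVMFPID"
    fi
fi

# Drop the test namespace (simplified remove_spdk_ns) and the initiator address.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if
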
00:09:29.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.976 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.235 22:35:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.236 22:35:47 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:30.236 Cannot find device "nvmf_tgt_br" 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.236 Cannot find device "nvmf_tgt_br2" 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:30.236 Cannot find device "nvmf_tgt_br" 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:30.236 
22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:30.236 Cannot find device "nvmf_tgt_br2" 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.236 22:35:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:30.236 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:30.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:09:30.496 00:09:30.496 --- 10.0.0.2 ping statistics --- 00:09:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.496 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:30.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:30.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:30.496 00:09:30.496 --- 10.0.0.3 ping statistics --- 00:09:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.496 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:30.496 00:09:30.496 --- 10.0.0.1 ping statistics --- 00:09:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.496 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
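The run of ip/iptables commands above is the nvmf_veth_init helper building the test network: one veth pair for the initiator side (nvmf_init_if, with its peer nvmf_init_br left in the root namespace), two veth pairs for the target whose inner ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br joining the three root-namespace peers. 10.0.0.1 is the initiator address; 10.0.0.2 and 10.0.0.3 are the two target-side addresses that later become the two multipath listeners. Condensed into a plain shell sketch (names and addresses taken from the trace above; this is a hand-written summary for orientation, not the verbatim common.sh code):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target path 1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target path 2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                 # bridge the root-namespace peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                # sanity-check both target paths
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # and the initiator from inside the namespace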
00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67285 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67285 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67285 ']' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.496 22:35:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 [2024-07-15 22:35:48.246365] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:30.496 [2024-07-15 22:35:48.246461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.755 [2024-07-15 22:35:48.385631] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.755 [2024-07-15 22:35:48.516690] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.755 [2024-07-15 22:35:48.516744] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.755 [2024-07-15 22:35:48.516756] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.755 [2024-07-15 22:35:48.516764] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.755 [2024-07-15 22:35:48.516771] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
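With the network in place, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 67285 in this run) and the rest of multipath.sh configures a single subsystem reachable over both target addresses, then exercises path failover. A condensed sketch of the rpc.py and nvme-cli calls that follow in the trace (host NQN/ID are the NVME_HOSTNQN/NVME_HOSTID values set earlier; the flag glosses in the comments are my reading of the test, not part of the log):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the options the test uses
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: ANA reporting on
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # one namespace, two listeners -> two paths to the same block device
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # initiator side: connect once per listener; -g/-G request TCP header/data digests
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # while fio runs against /dev/nvme0n1, paths are failed over by flipping each
  # listener's ANA state and polling /sys/block/nvme0c*n1/ana_state on the host
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

The two kernel path devices show up as nvme0c0n1 and nvme0c1n1 in the trace below, and the fio jobs further down complete with err=0 across the optimized/non-optimized/inaccessible transitions.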
00:09:30.755 [2024-07-15 22:35:48.516950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.755 [2024-07-15 22:35:48.517150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.755 [2024-07-15 22:35:48.517968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.755 [2024-07-15 22:35:48.517974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.014 [2024-07-15 22:35:48.595961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.582 22:35:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:31.841 [2024-07-15 22:35:49.613100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.841 22:35:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:32.099 Malloc0 00:09:32.099 22:35:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:32.665 22:35:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.923 22:35:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.923 [2024-07-15 22:35:50.737868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.182 22:35:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.182 [2024-07-15 22:35:50.994315] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.441 22:35:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:33.441 22:35:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:33.699 22:35:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.699 22:35:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:33.699 22:35:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.699 22:35:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.699 22:35:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.613 22:35:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.613 22:35:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.613 22:35:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.613 22:35:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.613 22:35:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.613 22:35:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67380 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:35.614 22:35:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:35.614 [global] 00:09:35.614 thread=1 00:09:35.614 invalidate=1 00:09:35.614 rw=randrw 00:09:35.614 time_based=1 00:09:35.614 runtime=6 00:09:35.614 ioengine=libaio 00:09:35.614 direct=1 00:09:35.614 bs=4096 00:09:35.614 iodepth=128 00:09:35.614 norandommap=0 00:09:35.614 numjobs=1 00:09:35.614 00:09:35.614 verify_dump=1 00:09:35.614 verify_backlog=512 00:09:35.614 verify_state_save=0 00:09:35.614 do_verify=1 00:09:35.614 verify=crc32c-intel 00:09:35.614 [job0] 00:09:35.614 filename=/dev/nvme0n1 00:09:35.614 Could not set queue depth (nvme0n1) 00:09:35.876 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.876 fio-3.35 00:09:35.876 Starting 1 thread 00:09:36.810 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:36.810 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:37.069 22:35:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:37.328 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:37.610 22:35:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67380 00:09:42.878 00:09:42.878 job0: (groupid=0, jobs=1): err= 0: pid=67401: Mon Jul 15 22:35:59 2024 00:09:42.878 read: IOPS=9480, BW=37.0MiB/s (38.8MB/s)(222MiB/6005msec) 00:09:42.878 slat (usec): min=7, max=6177, avg=61.93, stdev=246.72 00:09:42.878 clat (usec): min=1709, max=20177, avg=9180.81, stdev=1629.42 00:09:42.878 lat (usec): min=1726, max=21070, avg=9242.74, stdev=1636.55 00:09:42.878 clat percentiles (usec): 00:09:42.878 | 1.00th=[ 4752], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8160], 00:09:42.878 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:09:42.878 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10814], 95.00th=[12518], 00:09:42.878 | 99.00th=[14222], 99.50th=[14746], 99.90th=[16581], 99.95th=[18482], 00:09:42.878 | 99.99th=[19792] 00:09:42.878 bw ( KiB/s): min=10680, max=24928, per=51.21%, avg=19422.55, stdev=4770.23, samples=11 00:09:42.878 iops : min= 2670, max= 6232, avg=4855.64, stdev=1192.56, samples=11 00:09:42.878 write: IOPS=5637, BW=22.0MiB/s (23.1MB/s)(117MiB/5294msec); 0 zone resets 00:09:42.878 slat (usec): min=15, max=5649, avg=72.18, stdev=180.78 00:09:42.878 clat (usec): min=2688, max=20003, avg=8006.39, stdev=1536.05 00:09:42.878 lat (usec): min=2714, max=20026, avg=8078.57, stdev=1541.32 00:09:42.878 clat percentiles (usec): 00:09:42.878 | 1.00th=[ 3589], 5.00th=[ 4752], 10.00th=[ 6325], 20.00th=[ 7242], 00:09:42.878 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8356], 00:09:42.878 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[ 9896], 00:09:42.878 | 99.00th=[12387], 99.50th=[13435], 99.90th=[17171], 99.95th=[18744], 00:09:42.878 | 99.99th=[19530] 00:09:42.878 bw ( KiB/s): min=10936, max=24664, per=86.25%, avg=19449.45, stdev=4576.56, samples=11 00:09:42.878 iops : min= 2734, max= 6166, avg=4862.36, stdev=1144.14, samples=11 00:09:42.878 lat (msec) : 2=0.01%, 4=0.95%, 10=84.07%, 20=14.96%, 50=0.01% 00:09:42.878 cpu : usr=5.78%, sys=20.82%, ctx=5003, majf=0, minf=96 00:09:42.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:42.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.878 issued rwts: total=56932,29846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.878 00:09:42.878 Run status group 0 (all jobs): 00:09:42.878 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=222MiB (233MB), run=6005-6005msec 00:09:42.878 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=117MiB (122MB), run=5294-5294msec 00:09:42.878 00:09:42.878 Disk stats (read/write): 00:09:42.878 nvme0n1: ios=56347/29056, merge=0/0, ticks=495886/218899, in_queue=714785, util=98.70% 00:09:42.878 22:35:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:42.878 22:35:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n optimized 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67481 00:09:42.878 22:36:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:42.878 [global] 00:09:42.878 thread=1 00:09:42.878 invalidate=1 00:09:42.878 rw=randrw 00:09:42.878 time_based=1 00:09:42.878 runtime=6 00:09:42.878 ioengine=libaio 00:09:42.878 direct=1 00:09:42.878 bs=4096 00:09:42.878 iodepth=128 00:09:42.878 norandommap=0 00:09:42.878 numjobs=1 00:09:42.878 00:09:42.878 verify_dump=1 00:09:42.878 verify_backlog=512 00:09:42.878 verify_state_save=0 00:09:42.878 do_verify=1 00:09:42.878 verify=crc32c-intel 00:09:42.878 [job0] 00:09:42.879 filename=/dev/nvme0n1 00:09:42.879 Could not set queue depth (nvme0n1) 00:09:42.879 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.879 fio-3.35 00:09:42.879 Starting 1 thread 00:09:43.447 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:43.704 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:43.962 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:43.962 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.963 22:36:01 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:43.963 22:36:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:44.220 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:44.787 22:36:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67481 00:09:48.996 00:09:48.996 job0: (groupid=0, jobs=1): err= 0: pid=67502: Mon Jul 15 22:36:06 2024 00:09:48.996 read: IOPS=9831, BW=38.4MiB/s (40.3MB/s)(231MiB/6007msec) 00:09:48.996 slat (usec): min=6, max=8923, avg=51.73, stdev=225.37 00:09:48.996 clat (usec): min=410, max=20537, avg=9089.87, stdev=2217.50 00:09:48.996 lat (usec): min=426, max=20573, avg=9141.60, stdev=2226.65 00:09:48.996 clat percentiles (usec): 00:09:48.996 | 1.00th=[ 3752], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 7898], 00:09:48.996 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:48.996 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11994], 95.00th=[13435], 00:09:48.996 | 99.00th=[15664], 99.50th=[17433], 99.90th=[19006], 99.95th=[19268], 00:09:48.996 | 99.99th=[19530] 00:09:48.996 bw ( KiB/s): min= 7320, max=27840, per=50.58%, avg=19893.33, stdev=5810.00, samples=12 00:09:48.996 iops : min= 1830, max= 6960, avg=4973.33, stdev=1452.50, samples=12 00:09:48.996 write: IOPS=5511, BW=21.5MiB/s (22.6MB/s)(117MiB/5422msec); 0 zone resets 00:09:48.996 slat (usec): min=11, max=2232, avg=60.73, stdev=155.92 00:09:48.996 clat (usec): min=1170, max=19046, avg=7560.69, stdev=1989.12 00:09:48.996 lat (usec): min=1196, max=19094, avg=7621.42, stdev=2000.23 00:09:48.997 clat percentiles (usec): 00:09:48.997 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 4752], 20.00th=[ 5800], 00:09:48.997 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8160], 00:09:48.997 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10683], 00:09:48.997 | 99.00th=[13173], 99.50th=[14222], 99.90th=[16712], 99.95th=[17171], 00:09:48.997 | 99.99th=[17695] 00:09:48.997 bw ( KiB/s): min= 7720, max=28672, per=90.19%, avg=19884.67, stdev=5673.41, samples=12 00:09:48.997 iops : min= 1930, max= 7168, avg=4971.17, stdev=1418.35, samples=12 00:09:48.997 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:48.997 lat (msec) : 2=0.19%, 4=2.11%, 10=81.07%, 20=16.61%, 50=0.01% 00:09:48.997 cpu : usr=5.24%, sys=21.46%, ctx=5153, majf=0, minf=72 00:09:48.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:48.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.997 issued rwts: total=59060,29883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.997 00:09:48.997 Run status group 0 (all jobs): 00:09:48.997 READ: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=231MiB (242MB), run=6007-6007msec 00:09:48.997 WRITE: bw=21.5MiB/s (22.6MB/s), 21.5MiB/s-21.5MiB/s (22.6MB/s-22.6MB/s), io=117MiB (122MB), run=5422-5422msec 00:09:48.997 00:09:48.997 Disk stats (read/write): 00:09:48.997 nvme0n1: ios=58221/29334, merge=0/0, ticks=508286/208112, in_queue=716398, util=98.66% 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.997 22:36:06 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.997 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.325 rmmod nvme_tcp 00:09:49.325 rmmod nvme_fabrics 00:09:49.325 rmmod nvme_keyring 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67285 ']' 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67285 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67285 ']' 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67285 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67285 00:09:49.325 killing process with pid 67285 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67285' 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67285 00:09:49.325 22:36:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 67285 00:09:49.583 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:49.584 00:09:49.584 real 0m19.531s 00:09:49.584 user 1m14.126s 00:09:49.584 sys 0m8.811s 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.584 22:36:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.584 ************************************ 00:09:49.584 END TEST nvmf_target_multipath 00:09:49.584 ************************************ 00:09:49.584 22:36:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:49.584 22:36:07 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:49.584 22:36:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.584 22:36:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.584 22:36:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.584 ************************************ 00:09:49.584 START TEST nvmf_zcopy 00:09:49.584 ************************************ 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:49.584 * Looking for test storage... 
00:09:49.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.584 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.843 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:49.844 Cannot find device "nvmf_tgt_br" 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.844 Cannot find device "nvmf_tgt_br2" 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:49.844 Cannot find device "nvmf_tgt_br" 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:49.844 Cannot find device "nvmf_tgt_br2" 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:49.844 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:49.845 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:49.845 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:50.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:09:50.104 00:09:50.104 --- 10.0.0.2 ping statistics --- 00:09:50.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.104 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:50.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:50.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:50.104 00:09:50.104 --- 10.0.0.3 ping statistics --- 00:09:50.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.104 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:50.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:50.104 00:09:50.104 --- 10.0.0.1 ping statistics --- 00:09:50.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.104 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67750 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67750 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67750 ']' 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.104 22:36:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.104 [2024-07-15 22:36:07.870195] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:50.104 [2024-07-15 22:36:07.870335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.363 [2024-07-15 22:36:08.008816] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.363 [2024-07-15 22:36:08.138116] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.363 [2024-07-15 22:36:08.138175] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
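The pings above verify the topology that nvmf_veth_init assembled a few entries earlier: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 sit on veth ends moved into the nvmf_tgt_ns_spdk namespace, and the *_br peer ends are enslaved to the nvmf_br bridge so the two sides can reach each other; nvmf_tgt is then launched inside that namespace so only this traffic reaches it. A condensed replay of the traced commands is below (the earlier "Cannot find device" / "Cannot open network namespace" messages are just best-effort cleanup of a topology that did not exist yet); treat it as a sketch for a scratch host, not a substitute for nvmf/common.sh.

    # Condensed from the nvmf_veth_init trace above; run as root.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side moves into the ns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # ties the three peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT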
00:09:50.363 [2024-07-15 22:36:08.138203] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.363 [2024-07-15 22:36:08.138214] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.363 [2024-07-15 22:36:08.138223] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.363 [2024-07-15 22:36:08.138273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.622 [2024-07-15 22:36:08.197663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.187 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.187 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:51.187 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.187 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.187 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.187 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.188 [2024-07-15 22:36:08.929446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.188 [2024-07-15 22:36:08.945520] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
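At this point the target has been started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten has returned once /var/tmp/spdk.sock accepted RPCs. target/zcopy.sh then provisions it through rpc_cmd, the autotest wrapper around scripts/rpc.py. The calls traced above are gathered here for readability; comments explain the flags as far as the log itself allows, and the namespace attach that pairs malloc0 with the subsystem appears in the next entries.

    # Transport: TCP with zero-copy enabled; "-o -c 0" come from NVMF_TRANSPORT_OPTS in nvmf/common.sh.
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem cnode1: any host allowed (-a), serial SPDK00000000000001, up to 10 namespaces (-m 10).
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Data and discovery listeners on the namespaced target address.
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB RAM-backed bdev with 4096-byte blocks to serve as the namespace.
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    # Traced just below: attach it to the subsystem as NSID 1.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1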
00:09:51.188 malloc0 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:51.188 { 00:09:51.188 "params": { 00:09:51.188 "name": "Nvme$subsystem", 00:09:51.188 "trtype": "$TEST_TRANSPORT", 00:09:51.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.188 "adrfam": "ipv4", 00:09:51.188 "trsvcid": "$NVMF_PORT", 00:09:51.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.188 "hdgst": ${hdgst:-false}, 00:09:51.188 "ddgst": ${ddgst:-false} 00:09:51.188 }, 00:09:51.188 "method": "bdev_nvme_attach_controller" 00:09:51.188 } 00:09:51.188 EOF 00:09:51.188 )") 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:51.188 22:36:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:51.188 "params": { 00:09:51.188 "name": "Nvme1", 00:09:51.188 "trtype": "tcp", 00:09:51.188 "traddr": "10.0.0.2", 00:09:51.188 "adrfam": "ipv4", 00:09:51.188 "trsvcid": "4420", 00:09:51.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.188 "hdgst": false, 00:09:51.188 "ddgst": false 00:09:51.188 }, 00:09:51.188 "method": "bdev_nvme_attach_controller" 00:09:51.188 }' 00:09:51.445 [2024-07-15 22:36:09.044539] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:51.445 [2024-07-15 22:36:09.044644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67783 ] 00:09:51.445 [2024-07-15 22:36:09.187825] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.703 [2024-07-15 22:36:09.322508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.703 [2024-07-15 22:36:09.392113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.703 Running I/O for 10 seconds... 
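The first bdevperf pass above runs a 10-second verify workload at queue depth 128 with 8 KiB I/Os (-t 10 -q 128 -w verify -o 8192), reading its bdev configuration from /dev/fd/62, which gen_nvmf_target_json feeds through process substitution. A standalone equivalent is sketched below: the bdev_nvme_attach_controller entry is taken verbatim from the printf above, while the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape that gen_nvmf_target_json is assumed to add (the wrapper itself is not echoed in this trace), and the file path is illustrative only.

    cat <<'EOF' > /tmp/zcopy_bdevperf.json    # illustrative path
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # 10-second verify pass at queue depth 128 with 8 KiB I/Os, as in the run above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192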
00:10:03.900 
00:10:03.900 Latency(us)
00:10:03.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:03.900 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:03.900 Verification LBA range: start 0x0 length 0x1000
00:10:03.900 Nvme1n1 : 10.01 6064.60 47.38 0.00 0.00 21039.37 2263.97 31695.59
00:10:03.900 ===================================================================================================================
00:10:03.900 Total : 6064.60 47.38 0.00 0.00 21039.37 2263.97 31695.59
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67905
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:03.900 {
00:10:03.900 "params": {
00:10:03.900 "name": "Nvme$subsystem",
00:10:03.900 "trtype": "$TEST_TRANSPORT",
00:10:03.900 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:03.900 "adrfam": "ipv4",
00:10:03.900 "trsvcid": "$NVMF_PORT",
00:10:03.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:03.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:03.900 "hdgst": ${hdgst:-false},
00:10:03.900 "ddgst": ${ddgst:-false}
00:10:03.900 },
00:10:03.900 "method": "bdev_nvme_attach_controller"
00:10:03.900 }
00:10:03.900 EOF
00:10:03.900 )")
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:03.900 [2024-07-15 22:36:19.859335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:03.900 [2024-07-15 22:36:19.859401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
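A quick sanity check of the verify-run table above (bdevperf reports IOPS, MiB/s and latency in microseconds); the figures are internally consistent for 8192-byte I/Os at queue depth 128:

    awk 'BEGIN {
      iops = 6064.60; io_bytes = 8192; qdepth = 128
      printf "throughput = %.2f MiB/s\n", iops * io_bytes / 1048576   # -> 47.38, matches the MiB/s column
      printf "mean latency (depth/IOPS) = %.0f us\n", qdepth / iops * 1e6   # -> ~21106, close to the 21039.37 average
    }'

The second bdevperf pass being prepared here (-t 5 -q 128 -w randrw -M 50 -o 8192, pid 67905) runs while zcopy.sh keeps issuing RPCs against the subsystem; the wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows is consistent with a loop that re-issues nvmf_subsystem_add_ns for an NSID that is still attached while I/O is in flight, exercising the subsystem pause/resume path. The loop itself is not echoed because xtrace is disabled at target/zcopy.sh@41, so the sketch below is only a guess at its shape, not the script's literal body.

    # Hypothetical reconstruction -- not traced in this log.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done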
00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:03.900 22:36:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:03.900 "params": { 00:10:03.900 "name": "Nvme1", 00:10:03.900 "trtype": "tcp", 00:10:03.900 "traddr": "10.0.0.2", 00:10:03.900 "adrfam": "ipv4", 00:10:03.900 "trsvcid": "4420", 00:10:03.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.900 "hdgst": false, 00:10:03.900 "ddgst": false 00:10:03.900 }, 00:10:03.900 "method": "bdev_nvme_attach_controller" 00:10:03.900 }' 00:10:03.900 [2024-07-15 22:36:19.871305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.871363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.883316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.883373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.895297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.895339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.899952] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:10:03.900 [2024-07-15 22:36:19.900040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67905 ] 00:10:03.900 [2024-07-15 22:36:19.907310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.907366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.919307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.919348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.931310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.931360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.943317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.943362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.955320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.955364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.967320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.967366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.979323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.979380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:19.991330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:19.991376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.003357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.003416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.015332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.015376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.027336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.027380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.035195] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.900 [2024-07-15 22:36:20.039344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.039401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.051346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.051404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.063352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.063411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.075350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.075407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.087355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.087413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.099357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.099414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.111360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.111419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.123367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.123418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.135362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.135418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.147372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.147441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.159425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 
[2024-07-15 22:36:20.159500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.171390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.171433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.900 [2024-07-15 22:36:20.178674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.900 [2024-07-15 22:36:20.183375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.900 [2024-07-15 22:36:20.183441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.195381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.195425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.207384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.207428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.219398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.219451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.231397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.231455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.243398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.243440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.255419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.255460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.267377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:03.901 [2024-07-15 22:36:20.267404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.267447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.279415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.279454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.291420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.291505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.303414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.303466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.315510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.315562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.327450] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.327518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.339451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.339504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.351528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.351596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.363492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.363551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.375506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.375562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.387562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.387601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 Running I/O for 5 seconds... 00:10:03.901 [2024-07-15 22:36:20.399547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.399619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.418556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.418639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.433041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.433112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.448039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.448111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.464454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.464508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.480356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.480413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.499059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.499112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.513057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.513146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.528479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 
[2024-07-15 22:36:20.528539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.539260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.539326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.555301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.555359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.571390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.571465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.589356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.589430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.604578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.604631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.620306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.620375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.630077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.630122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.645706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.645775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.661926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.661981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.677950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.677995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.696928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.696998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.711604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.711680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.722182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.722213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.737103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.737140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.753065] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.753103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.769945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.769997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.787964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.788002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.802482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.802515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.819392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.819424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.834969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.835067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.846360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.846411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.863403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.863459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.877171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.877225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.894193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.894259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.909551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.909593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.918493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.918534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.934382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.934424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.952409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.952444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.966630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.966707] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.983092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.983161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:20.998877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:20.998954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.016788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.016851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.031924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.031979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.043197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.043232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.058769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.058807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.076253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.076316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.092236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.092294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.109532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.109589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.125525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.125581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.142908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.142972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.160434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.160492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.176923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.176993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.192477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.192537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.210347] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.210405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.226785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.226845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.242729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.242823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.260203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.260261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.275098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.275153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.289730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.289784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.306034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.306102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.321770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.321825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.339129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.339183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.355671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.355749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.372103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.372167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.388450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.388500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.404962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.405030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.422155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.422221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.437427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.437481] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.452945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.453025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.463300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.463348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.478082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.478152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.494039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.494141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.505499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.505552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.518158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.901 [2024-07-15 22:36:21.518202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.901 [2024-07-15 22:36:21.532909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.532964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.542871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.542995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.557236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.557308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.572597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.572663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.591493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.591567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.606056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.606113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.617132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.617182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.632447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.632504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.649036] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.649105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.665675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.665775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.681160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.681228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.692214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.692274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.708785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.708840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.902 [2024-07-15 22:36:21.724617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.902 [2024-07-15 22:36:21.724671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.735366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.735418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.751311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.751366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.768072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.768127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.783027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.783126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.797830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.797896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.813095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.813158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.828104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.828173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.844470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.844539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.861022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.861075] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.878676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.878763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.894424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.894481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.905913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.905972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.922782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.922842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.938052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.938117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.947816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.947898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.962884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.962950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.160 [2024-07-15 22:36:21.978245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.160 [2024-07-15 22:36:21.978326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.418 [2024-07-15 22:36:21.995993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.418 [2024-07-15 22:36:21.996059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.418 [2024-07-15 22:36:22.013061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.418 [2024-07-15 22:36:22.013103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.418 [2024-07-15 22:36:22.027843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.418 [2024-07-15 22:36:22.027893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.418 [2024-07-15 22:36:22.037316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.418 [2024-07-15 22:36:22.037347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.418 [2024-07-15 22:36:22.052366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.418 [2024-07-15 22:36:22.052410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.418 [2024-07-15 22:36:22.067353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.418 [2024-07-15 22:36:22.067411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.076739] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.076773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.092189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.092222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.107855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.107923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.126556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.126623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.141306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.141369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.152971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.153013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.169857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.169916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.184912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.184962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.196581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.196637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.212499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.212564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.229189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.229277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.419 [2024-07-15 22:36:22.245351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.419 [2024-07-15 22:36:22.245408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.262152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.262221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.278442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.278496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.295429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.295481] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.312966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.313019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.327957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.328025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.344245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.344315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.360491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.360555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.376981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.377047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.393738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.393800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.410848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.410950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.427207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.427261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.443859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-15 22:36:22.443955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-15 22:36:22.460730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.677 [2024-07-15 22:36:22.460793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.677 [2024-07-15 22:36:22.476583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.677 [2024-07-15 22:36:22.476632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.677 [2024-07-15 22:36:22.494594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.677 [2024-07-15 22:36:22.494697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.511036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.511069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.529109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.529137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.544044] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.544080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.553911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.553960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.571329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.571361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.585532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.585562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.601363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.601395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.611539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.611571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.628046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.628130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.644631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.644685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.661153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.661208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.934 [2024-07-15 22:36:22.670940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.934 [2024-07-15 22:36:22.670989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-15 22:36:22.686821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-15 22:36:22.686856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-15 22:36:22.704379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-15 22:36:22.704410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-15 22:36:22.721380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-15 22:36:22.721423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-15 22:36:22.737922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-15 22:36:22.737981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-15 22:36:22.756766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-15 22:36:22.756800] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.770448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.770482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.785989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.786023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.805245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.805279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.819115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.819150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.835193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.835232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.852228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.852265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.868729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.868762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.884838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.884903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.901653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.901686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.918769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.918813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.935667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.935716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.951932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.951963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.969923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.969965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.984243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.984277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:22.999637] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:22.999696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.192 [2024-07-15 22:36:23.017163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.192 [2024-07-15 22:36:23.017193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.449 [2024-07-15 22:36:23.031793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.449 [2024-07-15 22:36:23.031859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.449 [2024-07-15 22:36:23.046812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.449 [2024-07-15 22:36:23.046846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.449 [2024-07-15 22:36:23.056799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.449 [2024-07-15 22:36:23.056845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.449 [2024-07-15 22:36:23.073136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.449 [2024-07-15 22:36:23.073168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.089885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.089944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.106854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.106894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.121876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.121925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.136894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.136940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.153985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.154044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.171073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.171119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.187668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.187717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.203955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.203988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.221563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.221594] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.236851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.236915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.251725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.251784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.450 [2024-07-15 22:36:23.267600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.450 [2024-07-15 22:36:23.267635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.707 [2024-07-15 22:36:23.283935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.707 [2024-07-15 22:36:23.284006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.707 [2024-07-15 22:36:23.301427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.707 [2024-07-15 22:36:23.301469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.707 [2024-07-15 22:36:23.317056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.707 [2024-07-15 22:36:23.317118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.707 [2024-07-15 22:36:23.335482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.707 [2024-07-15 22:36:23.335557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.707 [2024-07-15 22:36:23.349774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.707 [2024-07-15 22:36:23.349844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.365875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.365950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.382166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.382205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.399097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.399142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.414271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.414344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.432188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.432225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.448541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.448585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.464518] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.464565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.475012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.475060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.490818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.490854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.505120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.505153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.521319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.521373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.708 [2024-07-15 22:36:23.537192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.708 [2024-07-15 22:36:23.537239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.555996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.556051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.566416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.566450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.583673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.583708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.601376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.601410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.615172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.615219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.631999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.632033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.646600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.646668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.662997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.663079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.678117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.678170] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.694336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.694405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.710291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.710367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.727528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.727567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.742249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.742321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.758422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.758488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.774831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.774902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.966 [2024-07-15 22:36:23.790732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.966 [2024-07-15 22:36:23.790786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.808679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.808725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.823863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.823906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.833178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.833214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.848375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.848441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.864443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.864517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.881904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.881959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.897980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.898026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.915666] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.915703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.929964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.930001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.946099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.946134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.962763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.962838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.980200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.980249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:23.996074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:23.996112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:24.014191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:24.014227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:24.029984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:24.030018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.224 [2024-07-15 22:36:24.048279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.224 [2024-07-15 22:36:24.048325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.064010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.064049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.082345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.082404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.101405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.101468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.118531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.118571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.134863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.134913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.151555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.151603] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.167827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.167911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.185011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.185046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.200531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.200582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.219792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.219848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.233674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.233713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.249107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.249147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.267968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.268018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.282084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.282138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.297348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.297391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.505 [2024-07-15 22:36:24.308645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.505 [2024-07-15 22:36:24.308679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.795 [2024-07-15 22:36:24.324417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.795 [2024-07-15 22:36:24.324453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.795 [2024-07-15 22:36:24.341903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.795 [2024-07-15 22:36:24.341955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.795 [2024-07-15 22:36:24.357592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.795 [2024-07-15 22:36:24.357652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.795 [2024-07-15 22:36:24.376892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.376936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.390980] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.391016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.406750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.406786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.424365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.424411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.439282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.439318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.450517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.450552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.466403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.466443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.481301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.481354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.496910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.496966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.513320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.513355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.530530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.530597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.546347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.546407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.564282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.564337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.579816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.579899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.597741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.597791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.796 [2024-07-15 22:36:24.615171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.796 [2024-07-15 22:36:24.615217] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.054 [2024-07-15 22:36:24.635149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.054 [2024-07-15 22:36:24.635192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.054 [2024-07-15 22:36:24.651744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.054 [2024-07-15 22:36:24.651815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.054 [2024-07-15 22:36:24.668284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.054 [2024-07-15 22:36:24.668332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.685583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.685622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.701992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.702027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.719609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.719645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.735688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.735748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.752771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.752811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.767933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.768000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.778935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.778981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.794433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.794482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.811646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.811684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.828209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.828246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.844193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.844228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.862104] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.862166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.055 [2024-07-15 22:36:24.876241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.055 [2024-07-15 22:36:24.876304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.892473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.892522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.909461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.909498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.926525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.926565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.941595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.941668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.951579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.951621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.968318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.968385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:24.984357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:24.984410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.002667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.002710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.016506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.016550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.032096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.032132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.049344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.049383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.064521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.064566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.080337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.080370] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.098150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.098190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.112103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.112140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.127895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.127941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.314 [2024-07-15 22:36:25.144757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.314 [2024-07-15 22:36:25.144824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.159906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.159958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.171074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.171134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.186729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.186782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.203429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.203469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.221104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.221143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.236888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.236923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.254190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.254229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.269620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.269654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.288139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.288195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.302486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.302523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.317955] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.317989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.336821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.336904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.351355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.351417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.368200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.368258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.384063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.384131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.572 [2024-07-15 22:36:25.402699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.572 [2024-07-15 22:36:25.402749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.831 00:10:07.831 Latency(us) 00:10:07.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.831 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:07.831 Nvme1n1 : 5.01 12259.89 95.78 0.00 0.00 10426.52 4140.68 25261.15 00:10:07.831 =================================================================================================================== 00:10:07.831 Total : 12259.89 95.78 0.00 0.00 10426.52 4140.68 25261.15 00:10:07.831 [2024-07-15 22:36:25.413329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.831 [2024-07-15 22:36:25.413362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.831 [2024-07-15 22:36:25.425334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.831 [2024-07-15 22:36:25.425362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.831 [2024-07-15 22:36:25.437316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.831 [2024-07-15 22:36:25.437342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.831 [2024-07-15 22:36:25.449337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.831 [2024-07-15 22:36:25.449376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.461316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.461342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.473316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.473340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.485350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.485380] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.497364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.497394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.509353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.509381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.521365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.521391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.533349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.533376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.545363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.545389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.557376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.557403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.569374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.569402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.581373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.581410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.593356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.593380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.605369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.605391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.617360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.617384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.629376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.629405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.641379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.641404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.653366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.653388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.832 [2024-07-15 22:36:25.665391] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.832 [2024-07-15 22:36:25.665416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.677380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.677405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.689403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.689438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.701382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.701405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.713386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.713410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.725391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.725438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.737397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.737434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 [2024-07-15 22:36:25.749413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.099 [2024-07-15 22:36:25.749448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.099 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67905) - No such process 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67905 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 delay0 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.099 22:36:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:08.100 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.100 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.100 22:36:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.100 22:36:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:08.358 [2024-07-15 22:36:25.943708] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:14.957 Initializing NVMe Controllers 00:10:14.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:14.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:14.957 Initialization complete. Launching workers. 00:10:14.957 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 63 00:10:14.957 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 350, failed to submit 33 00:10:14.957 success 231, unsuccess 119, failed 0 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:14.957 rmmod nvme_tcp 00:10:14.957 rmmod nvme_fabrics 00:10:14.957 rmmod nvme_keyring 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67750 ']' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67750 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67750 ']' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67750 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67750 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:14.957 killing process with pid 67750 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67750' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67750 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67750 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:14.957 00:10:14.957 real 0m25.080s 00:10:14.957 user 0m40.402s 00:10:14.957 sys 0m7.604s 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.957 22:36:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.957 ************************************ 00:10:14.957 END TEST nvmf_zcopy 00:10:14.958 ************************************ 00:10:14.958 22:36:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:14.958 22:36:32 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:14.958 22:36:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:14.958 22:36:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.958 22:36:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.958 ************************************ 00:10:14.958 START TEST nvmf_nmic 00:10:14.958 ************************************ 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:14.958 * Looking for test storage... 00:10:14.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.958 22:36:32 
nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 
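The sourcing traced above is dense, so here is a condensed sketch (not the literal nvmf/common.sh text) of the identity and addressing defaults the nmic test relies on. The host NQN/ID pair is whatever `nvme gen-hostnqn` produced for this run, and deriving the ID from the NQN with a parameter expansion is an assumption made for illustration:

  # Condensed sketch of the defaults sourced above (illustrative, not verbatim).
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID, reused as --hostid (assumed derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Later connects in this log expand to roughly:
  #   nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420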
00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:14.958 Cannot find device "nvmf_tgt_br" 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.958 Cannot find device "nvmf_tgt_br2" 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:14.958 Cannot find device "nvmf_tgt_br" 
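The nvmf_veth_init run that follows is easier to read with the topology stated up front: a bridge named nvmf_br joins the initiator-side veth end (10.0.0.1, root namespace) to the target-side veth ends (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace. A minimal sketch of the same setup, trimmed to a single target interface, looks roughly like this:

  # Rough single-interface version of the topology built by the trace below;
  # the real helper also creates nvmf_tgt_if2/nvmf_tgt_br2 for 10.0.0.3.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is pushed into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two veth halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target sanity check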
00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:14.958 Cannot find device "nvmf_tgt_br2" 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:14.958 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:15.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:10:15.239 00:10:15.239 --- 10.0.0.2 ping statistics --- 00:10:15.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.239 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:15.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:15.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:10:15.239 00:10:15.239 --- 10.0.0.3 ping statistics --- 00:10:15.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.239 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:15.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:15.239 00:10:15.239 --- 10.0.0.1 ping statistics --- 00:10:15.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.239 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:15.239 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68231 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68231 00:10:15.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68231 ']' 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
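With networking in place, the target binary is launched inside the namespace and the test blocks until its JSON-RPC socket answers; the subsystem bringup performed by the rpc_cmd traces that follow can be reproduced by hand with scripts/rpc.py. This is a sketch rather than the waitforlisten helper itself; the polling loop and the spdk_get_version probe stand in for what the helper does:

  # Launch nvmf_tgt inside the namespace (same arguments as nvmfappstart above).
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Approximate waitforlisten: poll the default RPC socket until it responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  # Bringup RPCs matching the rpc_cmd traces that follow in this log.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because the RPC endpoint is a UNIX-domain socket on the shared filesystem, rpc.py needs no `ip netns exec` prefix even though the target's TCP listener lives inside the namespace, which is why the rpc_cmd traces below run from the root namespace.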
00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.240 22:36:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.240 [2024-07-15 22:36:32.976599] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:10:15.240 [2024-07-15 22:36:32.976677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.499 [2024-07-15 22:36:33.116097] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.499 [2024-07-15 22:36:33.268692] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.499 [2024-07-15 22:36:33.269074] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.499 [2024-07-15 22:36:33.269245] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.499 [2024-07-15 22:36:33.269398] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.499 [2024-07-15 22:36:33.269440] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.499 [2024-07-15 22:36:33.269974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.499 [2024-07-15 22:36:33.270146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.499 [2024-07-15 22:36:33.270276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.499 [2024-07-15 22:36:33.270282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.758 [2024-07-15 22:36:33.352144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:16.325 22:36:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.325 22:36:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:16.325 22:36:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.325 22:36:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.325 22:36:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.325 [2024-07-15 22:36:34.032512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.325 Malloc0 00:10:16.325 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 [2024-07-15 22:36:34.113473] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.326 test case1: single bdev can't be used in multiple subsystems 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 [2024-07-15 22:36:34.145378] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:16.326 [2024-07-15 22:36:34.145412] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:16.326 [2024-07-15 22:36:34.145423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 request: 00:10:16.326 { 00:10:16.326 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:16.326 "namespace": { 00:10:16.326 "bdev_name": "Malloc0", 00:10:16.326 "no_auto_visible": false 00:10:16.326 }, 00:10:16.326 "method": "nvmf_subsystem_add_ns", 00:10:16.326 "req_id": 1 00:10:16.326 } 00:10:16.326 Got JSON-RPC error response 00:10:16.326 response: 00:10:16.326 { 00:10:16.326 "code": -32602, 00:10:16.326 
"message": "Invalid parameters" 00:10:16.326 } 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:16.326 Adding namespace failed - expected result. 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:16.326 test case2: host connect to nvmf target in multiple paths 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.326 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 [2024-07-15 22:36:34.161551] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:16.585 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.585 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:16.585 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:16.844 22:36:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.844 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:16.844 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.844 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:16.844 22:36:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:18.744 22:36:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.744 [global] 00:10:18.744 thread=1 00:10:18.744 invalidate=1 00:10:18.744 rw=write 00:10:18.744 time_based=1 00:10:18.744 runtime=1 00:10:18.744 ioengine=libaio 00:10:18.744 direct=1 00:10:18.744 bs=4096 00:10:18.744 iodepth=1 00:10:18.744 norandommap=0 00:10:18.744 numjobs=1 00:10:18.744 00:10:18.744 verify_dump=1 00:10:18.744 verify_backlog=512 00:10:18.744 verify_state_save=0 00:10:18.744 do_verify=1 00:10:18.744 verify=crc32c-intel 00:10:18.744 [job0] 00:10:18.744 filename=/dev/nvme0n1 00:10:18.744 Could 
not set queue depth (nvme0n1) 00:10:19.002 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.002 fio-3.35 00:10:19.002 Starting 1 thread 00:10:19.939 00:10:19.939 job0: (groupid=0, jobs=1): err= 0: pid=68317: Mon Jul 15 22:36:37 2024 00:10:19.939 read: IOPS=2353, BW=9415KiB/s (9641kB/s)(9424KiB/1001msec) 00:10:19.939 slat (nsec): min=12190, max=61653, avg=16433.88, stdev=5764.16 00:10:19.939 clat (usec): min=152, max=725, avg=221.33, stdev=35.74 00:10:19.939 lat (usec): min=166, max=747, avg=237.76, stdev=37.10 00:10:19.939 clat percentiles (usec): 00:10:19.939 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:10:19.939 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 227], 00:10:19.939 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 277], 00:10:19.939 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 515], 99.95th=[ 586], 00:10:19.939 | 99.99th=[ 725] 00:10:19.939 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:19.939 slat (usec): min=17, max=135, avg=26.19, stdev=11.14 00:10:19.939 clat (usec): min=71, max=7940, avg=141.98, stdev=230.82 00:10:19.939 lat (usec): min=103, max=7965, avg=168.16, stdev=231.94 00:10:19.939 clat percentiles (usec): 00:10:19.939 | 1.00th=[ 91], 5.00th=[ 97], 10.00th=[ 102], 20.00th=[ 111], 00:10:19.939 | 30.00th=[ 116], 40.00th=[ 122], 50.00th=[ 128], 60.00th=[ 135], 00:10:19.939 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 167], 95.00th=[ 182], 00:10:19.939 | 99.00th=[ 221], 99.50th=[ 314], 99.90th=[ 3163], 99.95th=[ 7177], 00:10:19.939 | 99.99th=[ 7963] 00:10:19.939 bw ( KiB/s): min=10488, max=10488, per=100.00%, avg=10488.00, stdev= 0.00, samples=1 00:10:19.939 iops : min= 2622, max= 2622, avg=2622.00, stdev= 0.00, samples=1 00:10:19.939 lat (usec) : 100=4.05%, 250=87.02%, 500=8.67%, 750=0.08%, 1000=0.06% 00:10:19.939 lat (msec) : 4=0.08%, 10=0.04% 00:10:19.939 cpu : usr=2.10%, sys=8.00%, ctx=4923, majf=0, minf=2 00:10:19.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.939 issued rwts: total=2356,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.939 00:10:19.939 Run status group 0 (all jobs): 00:10:19.939 READ: bw=9415KiB/s (9641kB/s), 9415KiB/s-9415KiB/s (9641kB/s-9641kB/s), io=9424KiB (9650kB), run=1001-1001msec 00:10:19.939 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:19.939 00:10:19.939 Disk stats (read/write): 00:10:19.939 nvme0n1: ios=2098/2357, merge=0/0, ticks=487/345, in_queue=832, util=90.28% 00:10:19.939 22:36:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:20.197 22:36:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.197 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o 
NAME,SERIAL 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.198 rmmod nvme_tcp 00:10:20.198 rmmod nvme_fabrics 00:10:20.198 rmmod nvme_keyring 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68231 ']' 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68231 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68231 ']' 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68231 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68231 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:20.198 killing process with pid 68231 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68231' 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68231 00:10:20.198 22:36:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68231 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:20.766 00:10:20.766 real 0m5.907s 00:10:20.766 user 0m18.932s 00:10:20.766 sys 0m2.066s 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.766 ************************************ 00:10:20.766 END TEST 
nvmf_nmic 00:10:20.766 22:36:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.766 ************************************ 00:10:20.766 22:36:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:20.766 22:36:38 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:20.766 22:36:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:20.766 22:36:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.766 22:36:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.766 ************************************ 00:10:20.766 START TEST nvmf_fio_target 00:10:20.766 ************************************ 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:20.766 * Looking for test storage... 00:10:20.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.766 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:20.767 Cannot find device "nvmf_tgt_br" 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.767 Cannot find device "nvmf_tgt_br2" 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:20.767 Cannot find device "nvmf_tgt_br" 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:20.767 Cannot find device "nvmf_tgt_br2" 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:20.767 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:21.025 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:21.025 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.025 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:21.025 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:21.026 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:21.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:10:21.285 00:10:21.285 --- 10.0.0.2 ping statistics --- 00:10:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.285 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:21.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:21.285 00:10:21.285 --- 10.0.0.3 ping statistics --- 00:10:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.285 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:21.285 00:10:21.285 --- 10.0.0.1 ping statistics --- 00:10:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.285 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68501 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68501 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68501 ']' 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.285 22:36:38 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.285 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.286 22:36:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.286 [2024-07-15 22:36:38.980443] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:10:21.286 [2024-07-15 22:36:38.980560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.544 [2024-07-15 22:36:39.123257] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.544 [2024-07-15 22:36:39.306481] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.544 [2024-07-15 22:36:39.306782] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.544 [2024-07-15 22:36:39.306974] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.544 [2024-07-15 22:36:39.307129] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.544 [2024-07-15 22:36:39.307171] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.544 [2024-07-15 22:36:39.307498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.544 [2024-07-15 22:36:39.307642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.544 [2024-07-15 22:36:39.307701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.544 [2024-07-15 22:36:39.307705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.800 [2024-07-15 22:36:39.388062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.365 22:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.624 [2024-07-15 22:36:40.332329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.624 22:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.880 22:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:22.880 22:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:10:23.138 22:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:23.138 22:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.394 22:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:23.394 22:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.962 22:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:23.962 22:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.962 22:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.219 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:24.219 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.785 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:24.785 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.042 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:25.043 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:25.299 22:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.299 22:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.299 22:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.556 22:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.556 22:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.815 22:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.073 [2024-07-15 22:36:43.815074] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.073 22:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:26.332 22:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:26.591 22:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.849 22:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:26.849 22:36:44 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:10:26.849 22:36:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.849 22:36:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:26.849 22:36:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:26.849 22:36:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:28.754 22:36:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:28.754 [global] 00:10:28.754 thread=1 00:10:28.754 invalidate=1 00:10:28.754 rw=write 00:10:28.754 time_based=1 00:10:28.754 runtime=1 00:10:28.754 ioengine=libaio 00:10:28.754 direct=1 00:10:28.754 bs=4096 00:10:28.754 iodepth=1 00:10:28.754 norandommap=0 00:10:28.754 numjobs=1 00:10:28.754 00:10:28.754 verify_dump=1 00:10:28.754 verify_backlog=512 00:10:28.754 verify_state_save=0 00:10:28.754 do_verify=1 00:10:28.754 verify=crc32c-intel 00:10:28.754 [job0] 00:10:28.754 filename=/dev/nvme0n1 00:10:28.754 [job1] 00:10:28.754 filename=/dev/nvme0n2 00:10:28.754 [job2] 00:10:28.754 filename=/dev/nvme0n3 00:10:28.754 [job3] 00:10:28.754 filename=/dev/nvme0n4 00:10:29.012 Could not set queue depth (nvme0n1) 00:10:29.012 Could not set queue depth (nvme0n2) 00:10:29.012 Could not set queue depth (nvme0n3) 00:10:29.012 Could not set queue depth (nvme0n4) 00:10:29.012 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.012 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.012 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.012 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.012 fio-3.35 00:10:29.012 Starting 4 threads 00:10:30.389 00:10:30.389 job0: (groupid=0, jobs=1): err= 0: pid=68686: Mon Jul 15 22:36:47 2024 00:10:30.389 read: IOPS=1422, BW=5690KiB/s (5827kB/s)(5696KiB/1001msec) 00:10:30.389 slat (nsec): min=10669, max=56740, avg=14817.09, stdev=4535.40 00:10:30.389 clat (usec): min=239, max=3126, avg=356.95, stdev=90.62 00:10:30.389 lat (usec): min=252, max=3159, avg=371.77, stdev=91.19 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 318], 00:10:30.389 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 363], 00:10:30.389 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 424], 00:10:30.389 | 99.00th=[ 461], 99.50th=[ 523], 99.90th=[ 1205], 99.95th=[ 3130], 00:10:30.389 | 99.99th=[ 3130] 00:10:30.389 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:30.389 slat (usec): 
min=13, max=164, avg=22.09, stdev= 5.81 00:10:30.389 clat (usec): min=194, max=461, avg=281.19, stdev=37.53 00:10:30.389 lat (usec): min=213, max=482, avg=303.29, stdev=37.86 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 251], 00:10:30.389 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:10:30.389 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 343], 00:10:30.389 | 99.00th=[ 408], 99.50th=[ 437], 99.90th=[ 453], 99.95th=[ 461], 00:10:30.389 | 99.99th=[ 461] 00:10:30.389 bw ( KiB/s): min= 8144, max= 8144, per=27.17%, avg=8144.00, stdev= 0.00, samples=1 00:10:30.389 iops : min= 2036, max= 2036, avg=2036.00, stdev= 0.00, samples=1 00:10:30.389 lat (usec) : 250=10.24%, 500=89.49%, 750=0.14%, 1000=0.03% 00:10:30.389 lat (msec) : 2=0.07%, 4=0.03% 00:10:30.389 cpu : usr=1.20%, sys=4.50%, ctx=2960, majf=0, minf=7 00:10:30.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 issued rwts: total=1424,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.389 job1: (groupid=0, jobs=1): err= 0: pid=68687: Mon Jul 15 22:36:47 2024 00:10:30.389 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:30.389 slat (nsec): min=12736, max=55175, avg=17111.90, stdev=5047.44 00:10:30.389 clat (usec): min=180, max=1468, avg=236.94, stdev=39.49 00:10:30.389 lat (usec): min=195, max=1491, avg=254.05, stdev=39.93 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:10:30.389 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:10:30.389 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:10:30.389 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 652], 99.95th=[ 660], 00:10:30.389 | 99.99th=[ 1467] 00:10:30.389 write: IOPS=2379, BW=9518KiB/s (9747kB/s)(9528KiB/1001msec); 0 zone resets 00:10:30.389 slat (usec): min=16, max=145, avg=25.97, stdev= 8.33 00:10:30.389 clat (usec): min=116, max=578, avg=171.79, stdev=27.66 00:10:30.389 lat (usec): min=138, max=600, avg=197.77, stdev=29.39 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 127], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 151], 00:10:30.389 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:10:30.389 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 219], 00:10:30.389 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 322], 99.95th=[ 383], 00:10:30.389 | 99.99th=[ 578] 00:10:30.389 bw ( KiB/s): min= 9080, max= 9080, per=30.29%, avg=9080.00, stdev= 0.00, samples=1 00:10:30.389 iops : min= 2270, max= 2270, avg=2270.00, stdev= 0.00, samples=1 00:10:30.389 lat (usec) : 250=87.22%, 500=12.69%, 750=0.07% 00:10:30.389 lat (msec) : 2=0.02% 00:10:30.389 cpu : usr=2.10%, sys=7.30%, ctx=4430, majf=0, minf=13 00:10:30.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 issued rwts: total=2048,2382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.389 job2: (groupid=0, jobs=1): err= 0: pid=68689: Mon Jul 15 
22:36:47 2024 00:10:30.389 read: IOPS=2027, BW=8112KiB/s (8307kB/s)(8120KiB/1001msec) 00:10:30.389 slat (nsec): min=12176, max=56822, avg=16874.39, stdev=5353.36 00:10:30.389 clat (usec): min=180, max=2554, avg=246.83, stdev=59.17 00:10:30.389 lat (usec): min=193, max=2592, avg=263.70, stdev=60.01 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 227], 00:10:30.389 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:10:30.389 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:10:30.389 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 660], 99.95th=[ 881], 00:10:30.389 | 99.99th=[ 2540] 00:10:30.389 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:30.389 slat (usec): min=14, max=102, avg=25.23, stdev= 6.69 00:10:30.389 clat (usec): min=126, max=331, avg=197.81, stdev=23.84 00:10:30.389 lat (usec): min=149, max=434, avg=223.04, stdev=25.33 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 178], 00:10:30.389 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:10:30.389 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 243], 00:10:30.389 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 289], 00:10:30.389 | 99.99th=[ 330] 00:10:30.389 bw ( KiB/s): min= 8192, max= 8192, per=27.33%, avg=8192.00, stdev= 0.00, samples=1 00:10:30.389 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:30.389 lat (usec) : 250=79.57%, 500=20.35%, 750=0.02%, 1000=0.02% 00:10:30.389 lat (msec) : 4=0.02% 00:10:30.389 cpu : usr=1.70%, sys=6.70%, ctx=4084, majf=0, minf=4 00:10:30.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 issued rwts: total=2030,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.389 job3: (groupid=0, jobs=1): err= 0: pid=68693: Mon Jul 15 22:36:47 2024 00:10:30.389 read: IOPS=1423, BW=5694KiB/s (5831kB/s)(5700KiB/1001msec) 00:10:30.389 slat (nsec): min=11236, max=63925, avg=21403.46, stdev=5390.02 00:10:30.389 clat (usec): min=212, max=3057, avg=349.50, stdev=88.81 00:10:30.389 lat (usec): min=239, max=3072, avg=370.90, stdev=88.93 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 310], 00:10:30.389 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:10:30.389 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 416], 00:10:30.389 | 99.00th=[ 449], 99.50th=[ 494], 99.90th=[ 1287], 99.95th=[ 3064], 00:10:30.389 | 99.99th=[ 3064] 00:10:30.389 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:30.389 slat (nsec): min=22653, max=70413, avg=31301.27, stdev=6120.57 00:10:30.389 clat (usec): min=189, max=435, avg=271.33, stdev=35.69 00:10:30.389 lat (usec): min=214, max=472, avg=302.63, stdev=36.53 00:10:30.389 clat percentiles (usec): 00:10:30.389 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 243], 00:10:30.389 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:10:30.389 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 330], 00:10:30.389 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 429], 99.95th=[ 437], 00:10:30.389 | 99.99th=[ 437] 00:10:30.389 
bw ( KiB/s): min= 8144, max= 8144, per=27.17%, avg=8144.00, stdev= 0.00, samples=1 00:10:30.389 iops : min= 2036, max= 2036, avg=2036.00, stdev= 0.00, samples=1 00:10:30.389 lat (usec) : 250=14.96%, 500=84.80%, 750=0.10%, 1000=0.03% 00:10:30.389 lat (msec) : 2=0.07%, 4=0.03% 00:10:30.389 cpu : usr=1.20%, sys=7.10%, ctx=2961, majf=0, minf=11 00:10:30.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.389 issued rwts: total=1425,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.389 00:10:30.389 Run status group 0 (all jobs): 00:10:30.389 READ: bw=27.0MiB/s (28.3MB/s), 5690KiB/s-8184KiB/s (5827kB/s-8380kB/s), io=27.1MiB (28.4MB), run=1001-1001msec 00:10:30.389 WRITE: bw=29.3MiB/s (30.7MB/s), 6138KiB/s-9518KiB/s (6285kB/s-9747kB/s), io=29.3MiB (30.7MB), run=1001-1001msec 00:10:30.389 00:10:30.389 Disk stats (read/write): 00:10:30.389 nvme0n1: ios=1088/1536, merge=0/0, ticks=373/393, in_queue=766, util=87.86% 00:10:30.389 nvme0n2: ios=1776/2048, merge=0/0, ticks=465/386, in_queue=851, util=89.45% 00:10:30.389 nvme0n3: ios=1536/1982, merge=0/0, ticks=397/413, in_queue=810, util=89.03% 00:10:30.389 nvme0n4: ios=1038/1536, merge=0/0, ticks=371/442, in_queue=813, util=89.78% 00:10:30.389 22:36:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:30.389 [global] 00:10:30.389 thread=1 00:10:30.389 invalidate=1 00:10:30.389 rw=randwrite 00:10:30.389 time_based=1 00:10:30.389 runtime=1 00:10:30.389 ioengine=libaio 00:10:30.389 direct=1 00:10:30.389 bs=4096 00:10:30.389 iodepth=1 00:10:30.389 norandommap=0 00:10:30.389 numjobs=1 00:10:30.389 00:10:30.389 verify_dump=1 00:10:30.389 verify_backlog=512 00:10:30.389 verify_state_save=0 00:10:30.389 do_verify=1 00:10:30.389 verify=crc32c-intel 00:10:30.389 [job0] 00:10:30.389 filename=/dev/nvme0n1 00:10:30.389 [job1] 00:10:30.389 filename=/dev/nvme0n2 00:10:30.389 [job2] 00:10:30.389 filename=/dev/nvme0n3 00:10:30.389 [job3] 00:10:30.389 filename=/dev/nvme0n4 00:10:30.389 Could not set queue depth (nvme0n1) 00:10:30.389 Could not set queue depth (nvme0n2) 00:10:30.389 Could not set queue depth (nvme0n3) 00:10:30.389 Could not set queue depth (nvme0n4) 00:10:30.389 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.389 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.389 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.389 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.389 fio-3.35 00:10:30.389 Starting 4 threads 00:10:31.765 00:10:31.765 job0: (groupid=0, jobs=1): err= 0: pid=68753: Mon Jul 15 22:36:49 2024 00:10:31.765 read: IOPS=2031, BW=8128KiB/s (8323kB/s)(8136KiB/1001msec) 00:10:31.765 slat (usec): min=9, max=118, avg=19.60, stdev= 8.11 00:10:31.765 clat (usec): min=164, max=790, avg=238.45, stdev=38.12 00:10:31.765 lat (usec): min=184, max=800, avg=258.05, stdev=38.11 00:10:31.765 clat percentiles (usec): 00:10:31.765 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:10:31.765 | 30.00th=[ 219], 
40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:10:31.765 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 297], 00:10:31.765 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 603], 99.95th=[ 758], 00:10:31.765 | 99.99th=[ 791] 00:10:31.765 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:31.765 slat (usec): min=14, max=121, avg=29.43, stdev=11.29 00:10:31.765 clat (usec): min=112, max=7444, avg=198.04, stdev=289.66 00:10:31.765 lat (usec): min=136, max=7464, avg=227.47, stdev=289.94 00:10:31.765 clat percentiles (usec): 00:10:31.765 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:10:31.765 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:10:31.765 | 70.00th=[ 190], 80.00th=[ 202], 90.00th=[ 229], 95.00th=[ 255], 00:10:31.765 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 6390], 99.95th=[ 6587], 00:10:31.765 | 99.99th=[ 7439] 00:10:31.765 bw ( KiB/s): min= 9320, max= 9320, per=31.65%, avg=9320.00, stdev= 0.00, samples=1 00:10:31.765 iops : min= 2330, max= 2330, avg=2330.00, stdev= 0.00, samples=1 00:10:31.765 lat (usec) : 250=81.99%, 500=17.74%, 750=0.02%, 1000=0.10% 00:10:31.765 lat (msec) : 4=0.05%, 10=0.10% 00:10:31.765 cpu : usr=2.20%, sys=8.00%, ctx=4085, majf=0, minf=9 00:10:31.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.765 issued rwts: total=2034,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.765 job1: (groupid=0, jobs=1): err= 0: pid=68754: Mon Jul 15 22:36:49 2024 00:10:31.765 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:31.765 slat (nsec): min=10778, max=48106, avg=16191.14, stdev=4407.15 00:10:31.765 clat (usec): min=221, max=1343, avg=329.44, stdev=58.66 00:10:31.765 lat (usec): min=234, max=1355, avg=345.63, stdev=60.35 00:10:31.765 clat percentiles (usec): 00:10:31.765 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 269], 00:10:31.765 | 30.00th=[ 293], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 351], 00:10:31.765 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 392], 95.00th=[ 404], 00:10:31.765 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 668], 99.95th=[ 1336], 00:10:31.765 | 99.99th=[ 1336] 00:10:31.765 write: IOPS=1537, BW=6150KiB/s (6297kB/s)(6156KiB/1001msec); 0 zone resets 00:10:31.765 slat (usec): min=13, max=119, avg=21.62, stdev= 6.60 00:10:31.765 clat (usec): min=139, max=550, avg=279.39, stdev=52.97 00:10:31.765 lat (usec): min=173, max=618, avg=301.01, stdev=54.87 00:10:31.765 clat percentiles (usec): 00:10:31.765 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 210], 20.00th=[ 231], 00:10:31.765 | 30.00th=[ 251], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:10:31.765 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 363], 00:10:31.765 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 486], 99.95th=[ 553], 00:10:31.765 | 99.99th=[ 553] 00:10:31.765 bw ( KiB/s): min= 7488, max= 7488, per=25.43%, avg=7488.00, stdev= 0.00, samples=1 00:10:31.765 iops : min= 1872, max= 1872, avg=1872.00, stdev= 0.00, samples=1 00:10:31.765 lat (usec) : 250=18.37%, 500=81.53%, 750=0.07% 00:10:31.765 lat (msec) : 2=0.03% 00:10:31.765 cpu : usr=1.50%, sys=4.90%, ctx=3075, majf=0, minf=7 00:10:31.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.765 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.765 issued rwts: total=1536,1539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.765 job2: (groupid=0, jobs=1): err= 0: pid=68755: Mon Jul 15 22:36:49 2024 00:10:31.765 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:31.765 slat (nsec): min=11093, max=54463, avg=16614.03, stdev=4272.31 00:10:31.765 clat (usec): min=217, max=1314, avg=329.29, stdev=60.31 00:10:31.765 lat (usec): min=232, max=1330, avg=345.90, stdev=60.10 00:10:31.765 clat percentiles (usec): 00:10:31.765 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 265], 00:10:31.765 | 30.00th=[ 293], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 351], 00:10:31.765 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 408], 00:10:31.765 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 603], 99.95th=[ 1319], 00:10:31.766 | 99.99th=[ 1319] 00:10:31.766 write: IOPS=1536, BW=6146KiB/s (6293kB/s)(6152KiB/1001msec); 0 zone resets 00:10:31.766 slat (usec): min=17, max=146, avg=31.28, stdev= 8.53 00:10:31.766 clat (usec): min=132, max=467, avg=269.06, stdev=50.38 00:10:31.766 lat (usec): min=156, max=571, avg=300.33, stdev=53.95 00:10:31.766 clat percentiles (usec): 00:10:31.766 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 223], 00:10:31.766 | 30.00th=[ 241], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 281], 00:10:31.766 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 351], 00:10:31.766 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 469], 00:10:31.766 | 99.99th=[ 469] 00:10:31.766 bw ( KiB/s): min= 7480, max= 7480, per=25.41%, avg=7480.00, stdev= 0.00, samples=1 00:10:31.766 iops : min= 1870, max= 1870, avg=1870.00, stdev= 0.00, samples=1 00:10:31.766 lat (usec) : 250=22.58%, 500=77.29%, 750=0.10% 00:10:31.766 lat (msec) : 2=0.03% 00:10:31.766 cpu : usr=1.80%, sys=6.30%, ctx=3074, majf=0, minf=17 00:10:31.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.766 issued rwts: total=1536,1538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.766 job3: (groupid=0, jobs=1): err= 0: pid=68756: Mon Jul 15 22:36:49 2024 00:10:31.766 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:31.766 slat (nsec): min=8634, max=60364, avg=14730.21, stdev=4982.74 00:10:31.766 clat (usec): min=176, max=845, avg=243.95, stdev=36.12 00:10:31.766 lat (usec): min=189, max=859, avg=258.68, stdev=36.61 00:10:31.766 clat percentiles (usec): 00:10:31.766 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 217], 00:10:31.766 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:10:31.766 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:10:31.766 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 404], 99.95th=[ 537], 00:10:31.766 | 99.99th=[ 848] 00:10:31.766 write: IOPS=2240, BW=8963KiB/s (9178kB/s)(8972KiB/1001msec); 0 zone resets 00:10:31.766 slat (usec): min=12, max=127, avg=22.50, stdev= 7.70 00:10:31.766 clat (usec): min=110, max=358, avg=183.96, stdev=34.08 00:10:31.766 lat (usec): min=131, max=425, avg=206.47, stdev=35.58 00:10:31.766 clat percentiles (usec): 00:10:31.766 | 1.00th=[ 
123], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 157], 00:10:31.766 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 186], 00:10:31.766 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 231], 95.00th=[ 251], 00:10:31.766 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 326], 99.95th=[ 343], 00:10:31.766 | 99.99th=[ 359] 00:10:31.766 bw ( KiB/s): min= 9448, max= 9448, per=32.09%, avg=9448.00, stdev= 0.00, samples=1 00:10:31.766 iops : min= 2362, max= 2362, avg=2362.00, stdev= 0.00, samples=1 00:10:31.766 lat (usec) : 250=79.66%, 500=20.30%, 750=0.02%, 1000=0.02% 00:10:31.766 cpu : usr=1.30%, sys=6.70%, ctx=4295, majf=0, minf=12 00:10:31.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.766 issued rwts: total=2048,2243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.766 00:10:31.766 Run status group 0 (all jobs): 00:10:31.766 READ: bw=27.9MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=27.9MiB (29.3MB), run=1001-1001msec 00:10:31.766 WRITE: bw=28.8MiB/s (30.1MB/s), 6146KiB/s-8963KiB/s (6293kB/s-9178kB/s), io=28.8MiB (30.2MB), run=1001-1001msec 00:10:31.766 00:10:31.766 Disk stats (read/write): 00:10:31.766 nvme0n1: ios=1636/2048, merge=0/0, ticks=399/404, in_queue=803, util=87.76% 00:10:31.766 nvme0n2: ios=1130/1536, merge=0/0, ticks=407/368, in_queue=775, util=89.49% 00:10:31.766 nvme0n3: ios=1086/1536, merge=0/0, ticks=379/426, in_queue=805, util=89.44% 00:10:31.766 nvme0n4: ios=1764/2048, merge=0/0, ticks=454/383, in_queue=837, util=90.41% 00:10:31.766 22:36:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:31.766 [global] 00:10:31.766 thread=1 00:10:31.766 invalidate=1 00:10:31.766 rw=write 00:10:31.766 time_based=1 00:10:31.766 runtime=1 00:10:31.766 ioengine=libaio 00:10:31.766 direct=1 00:10:31.766 bs=4096 00:10:31.766 iodepth=128 00:10:31.766 norandommap=0 00:10:31.766 numjobs=1 00:10:31.766 00:10:31.766 verify_dump=1 00:10:31.766 verify_backlog=512 00:10:31.766 verify_state_save=0 00:10:31.766 do_verify=1 00:10:31.766 verify=crc32c-intel 00:10:31.766 [job0] 00:10:31.766 filename=/dev/nvme0n1 00:10:31.766 [job1] 00:10:31.766 filename=/dev/nvme0n2 00:10:31.766 [job2] 00:10:31.766 filename=/dev/nvme0n3 00:10:31.766 [job3] 00:10:31.766 filename=/dev/nvme0n4 00:10:31.766 Could not set queue depth (nvme0n1) 00:10:31.766 Could not set queue depth (nvme0n2) 00:10:31.766 Could not set queue depth (nvme0n3) 00:10:31.766 Could not set queue depth (nvme0n4) 00:10:31.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.766 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.766 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.766 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.766 fio-3.35 00:10:31.766 Starting 4 threads 00:10:33.141 00:10:33.141 job0: (groupid=0, jobs=1): err= 0: pid=68810: Mon Jul 15 22:36:50 2024 00:10:33.141 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec) 00:10:33.141 slat (usec): min=7, max=10745, avg=173.92, stdev=920.37 
00:10:33.141 clat (usec): min=1445, max=45819, avg=22644.40, stdev=7061.72 00:10:33.141 lat (usec): min=4894, max=45844, avg=22818.32, stdev=7047.56 00:10:33.141 clat percentiles (usec): 00:10:33.141 | 1.00th=[ 5407], 5.00th=[15139], 10.00th=[17171], 20.00th=[17957], 00:10:33.141 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19530], 60.00th=[22152], 00:10:33.141 | 70.00th=[26608], 80.00th=[28443], 90.00th=[29754], 95.00th=[37487], 00:10:33.141 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:33.141 | 99.99th=[45876] 00:10:33.141 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:10:33.141 slat (usec): min=10, max=10578, avg=152.56, stdev=749.20 00:10:33.141 clat (usec): min=11303, max=31842, avg=19607.55, stdev=4582.74 00:10:33.141 lat (usec): min=13778, max=32534, avg=19760.12, stdev=4562.36 00:10:33.141 clat percentiles (usec): 00:10:33.141 | 1.00th=[13435], 5.00th=[14353], 10.00th=[14615], 20.00th=[15401], 00:10:33.141 | 30.00th=[16581], 40.00th=[17695], 50.00th=[19530], 60.00th=[20055], 00:10:33.141 | 70.00th=[20317], 80.00th=[22676], 90.00th=[28705], 95.00th=[29492], 00:10:33.141 | 99.00th=[31589], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:10:33.141 | 99.99th=[31851] 00:10:33.141 bw ( KiB/s): min=12288, max=12312, per=19.77%, avg=12300.00, stdev=16.97, samples=2 00:10:33.141 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:33.141 lat (msec) : 2=0.02%, 10=0.99%, 20=55.74%, 50=43.26% 00:10:33.141 cpu : usr=2.69%, sys=9.87%, ctx=188, majf=0, minf=13 00:10:33.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:33.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.141 issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.141 job1: (groupid=0, jobs=1): err= 0: pid=68811: Mon Jul 15 22:36:50 2024 00:10:33.141 read: IOPS=5361, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1002msec) 00:10:33.141 slat (usec): min=4, max=3545, avg=87.32, stdev=331.86 00:10:33.141 clat (usec): min=614, max=14959, avg=11470.20, stdev=1228.49 00:10:33.141 lat (usec): min=1993, max=15010, avg=11557.52, stdev=1255.20 00:10:33.141 clat percentiles (usec): 00:10:33.141 | 1.00th=[ 5800], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11076], 00:10:33.141 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:10:33.141 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12911], 95.00th=[13173], 00:10:33.141 | 99.00th=[13829], 99.50th=[14353], 99.90th=[14746], 99.95th=[14877], 00:10:33.141 | 99.99th=[15008] 00:10:33.141 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:33.141 slat (usec): min=10, max=3273, avg=86.51, stdev=349.40 00:10:33.141 clat (usec): min=8412, max=15194, avg=11534.88, stdev=879.62 00:10:33.141 lat (usec): min=8434, max=15234, avg=11621.39, stdev=930.75 00:10:33.141 clat percentiles (usec): 00:10:33.141 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[10814], 20.00th=[10945], 00:10:33.141 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:10:33.141 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12518], 95.00th=[13435], 00:10:33.141 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15008], 99.95th=[15008], 00:10:33.141 | 99.99th=[15139] 00:10:33.141 bw ( KiB/s): min=22476, max=22624, per=36.24%, avg=22550.00, stdev=104.65, samples=2 00:10:33.141 
iops : min= 5619, max= 5656, avg=5637.50, stdev=26.16, samples=2 00:10:33.141 lat (usec) : 750=0.01% 00:10:33.141 lat (msec) : 2=0.01%, 4=0.20%, 10=4.45%, 20=95.33% 00:10:33.141 cpu : usr=3.90%, sys=17.28%, ctx=545, majf=0, minf=12 00:10:33.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:33.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.141 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.141 job2: (groupid=0, jobs=1): err= 0: pid=68812: Mon Jul 15 22:36:50 2024 00:10:33.141 read: IOPS=4528, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1002msec) 00:10:33.141 slat (usec): min=5, max=4686, avg=107.29, stdev=419.14 00:10:33.141 clat (usec): min=671, max=20002, avg=13730.21, stdev=1779.96 00:10:33.141 lat (usec): min=3084, max=20041, avg=13837.50, stdev=1811.66 00:10:33.141 clat percentiles (usec): 00:10:33.141 | 1.00th=[ 8160], 5.00th=[11207], 10.00th=[12256], 20.00th=[12911], 00:10:33.141 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:10:33.141 | 70.00th=[13960], 80.00th=[15401], 90.00th=[15795], 95.00th=[16319], 00:10:33.141 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19792], 99.95th=[19792], 00:10:33.141 | 99.99th=[20055] 00:10:33.141 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:33.141 slat (usec): min=11, max=4557, avg=102.80, stdev=394.83 00:10:33.141 clat (usec): min=9651, max=20314, avg=13929.13, stdev=1524.42 00:10:33.141 lat (usec): min=9677, max=20341, avg=14031.94, stdev=1562.10 00:10:33.141 clat percentiles (usec): 00:10:33.141 | 1.00th=[10814], 5.00th=[12256], 10.00th=[12518], 20.00th=[12780], 00:10:33.142 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:10:33.142 | 70.00th=[14353], 80.00th=[14877], 90.00th=[16319], 95.00th=[16712], 00:10:33.142 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20317], 99.95th=[20317], 00:10:33.142 | 99.99th=[20317] 00:10:33.142 bw ( KiB/s): min=16392, max=20472, per=29.62%, avg=18432.00, stdev=2885.00, samples=2 00:10:33.142 iops : min= 4098, max= 5118, avg=4608.00, stdev=721.25, samples=2 00:10:33.142 lat (usec) : 750=0.01% 00:10:33.142 lat (msec) : 4=0.24%, 10=0.55%, 20=99.09%, 50=0.11% 00:10:33.142 cpu : usr=3.80%, sys=14.99%, ctx=566, majf=0, minf=11 00:10:33.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:33.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.142 issued rwts: total=4538,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.142 job3: (groupid=0, jobs=1): err= 0: pid=68813: Mon Jul 15 22:36:50 2024 00:10:33.142 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:10:33.142 slat (usec): min=6, max=6520, avg=203.45, stdev=847.55 00:10:33.142 clat (usec): min=18179, max=41394, avg=25804.41, stdev=4433.74 00:10:33.142 lat (usec): min=18203, max=42500, avg=26007.86, stdev=4503.20 00:10:33.142 clat percentiles (usec): 00:10:33.142 | 1.00th=[18744], 5.00th=[21103], 10.00th=[21365], 20.00th=[21627], 00:10:33.142 | 30.00th=[21890], 40.00th=[22938], 50.00th=[24511], 60.00th=[27132], 00:10:33.142 | 70.00th=[29230], 80.00th=[30016], 90.00th=[31065], 95.00th=[32375], 00:10:33.142 | 
99.00th=[38011], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:10:33.142 | 99.99th=[41157] 00:10:33.142 write: IOPS=2310, BW=9242KiB/s (9464kB/s)(9288KiB/1005msec); 0 zone resets 00:10:33.142 slat (usec): min=14, max=11670, avg=242.18, stdev=885.31 00:10:33.142 clat (usec): min=3577, max=62391, avg=31527.96, stdev=12362.57 00:10:33.142 lat (usec): min=6488, max=62417, avg=31770.14, stdev=12438.07 00:10:33.142 clat percentiles (usec): 00:10:33.142 | 1.00th=[11338], 5.00th=[15795], 10.00th=[17171], 20.00th=[19530], 00:10:33.142 | 30.00th=[21890], 40.00th=[25560], 50.00th=[31065], 60.00th=[35390], 00:10:33.142 | 70.00th=[37487], 80.00th=[41681], 90.00th=[50070], 95.00th=[55837], 00:10:33.142 | 99.00th=[60031], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:10:33.142 | 99.99th=[62653] 00:10:33.142 bw ( KiB/s): min= 7168, max=10404, per=14.12%, avg=8786.00, stdev=2288.20, samples=2 00:10:33.142 iops : min= 1792, max= 2601, avg=2196.50, stdev=572.05, samples=2 00:10:33.142 lat (msec) : 4=0.02%, 10=0.37%, 20=11.49%, 50=82.61%, 100=5.51% 00:10:33.142 cpu : usr=2.49%, sys=7.87%, ctx=283, majf=0, minf=15 00:10:33.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:33.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.142 issued rwts: total=2048,2322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.142 00:10:33.142 Run status group 0 (all jobs): 00:10:33.142 READ: bw=57.8MiB/s (60.6MB/s), 8151KiB/s-20.9MiB/s (8347kB/s-22.0MB/s), io=58.1MiB (60.9MB), run=1002-1005msec 00:10:33.142 WRITE: bw=60.8MiB/s (63.7MB/s), 9242KiB/s-22.0MiB/s (9464kB/s-23.0MB/s), io=61.1MiB (64.0MB), run=1002-1005msec 00:10:33.142 00:10:33.142 Disk stats (read/write): 00:10:33.142 nvme0n1: ios=2514/2560, merge=0/0, ticks=13661/11347, in_queue=25008, util=88.28% 00:10:33.142 nvme0n2: ios=4650/4919, merge=0/0, ticks=16825/15474, in_queue=32299, util=88.57% 00:10:33.142 nvme0n3: ios=3990/4096, merge=0/0, ticks=17252/15960, in_queue=33212, util=90.13% 00:10:33.142 nvme0n4: ios=1745/2048, merge=0/0, ticks=14543/19961, in_queue=34504, util=89.76% 00:10:33.142 22:36:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:33.142 [global] 00:10:33.142 thread=1 00:10:33.142 invalidate=1 00:10:33.142 rw=randwrite 00:10:33.142 time_based=1 00:10:33.142 runtime=1 00:10:33.142 ioengine=libaio 00:10:33.142 direct=1 00:10:33.142 bs=4096 00:10:33.142 iodepth=128 00:10:33.142 norandommap=0 00:10:33.142 numjobs=1 00:10:33.142 00:10:33.142 verify_dump=1 00:10:33.142 verify_backlog=512 00:10:33.142 verify_state_save=0 00:10:33.142 do_verify=1 00:10:33.142 verify=crc32c-intel 00:10:33.142 [job0] 00:10:33.142 filename=/dev/nvme0n1 00:10:33.142 [job1] 00:10:33.142 filename=/dev/nvme0n2 00:10:33.142 [job2] 00:10:33.142 filename=/dev/nvme0n3 00:10:33.142 [job3] 00:10:33.142 filename=/dev/nvme0n4 00:10:33.142 Could not set queue depth (nvme0n1) 00:10:33.142 Could not set queue depth (nvme0n2) 00:10:33.142 Could not set queue depth (nvme0n3) 00:10:33.142 Could not set queue depth (nvme0n4) 00:10:33.142 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.142 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:33.142 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.142 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.142 fio-3.35 00:10:33.142 Starting 4 threads 00:10:34.520 00:10:34.520 job0: (groupid=0, jobs=1): err= 0: pid=68866: Mon Jul 15 22:36:52 2024 00:10:34.520 read: IOPS=5464, BW=21.3MiB/s (22.4MB/s)(21.5MiB/1006msec) 00:10:34.520 slat (usec): min=7, max=8349, avg=85.15, stdev=520.50 00:10:34.520 clat (usec): min=1717, max=25819, avg=11967.40, stdev=2649.23 00:10:34.520 lat (usec): min=5191, max=29797, avg=12052.55, stdev=2671.93 00:10:34.520 clat percentiles (usec): 00:10:34.520 | 1.00th=[ 6915], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10552], 00:10:34.520 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:10:34.520 | 70.00th=[11600], 80.00th=[12518], 90.00th=[16188], 95.00th=[17957], 00:10:34.520 | 99.00th=[18744], 99.50th=[19006], 99.90th=[23725], 99.95th=[23987], 00:10:34.520 | 99.99th=[25822] 00:10:34.520 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:34.520 slat (usec): min=7, max=12092, avg=87.04, stdev=529.49 00:10:34.520 clat (usec): min=5579, max=22208, avg=10939.13, stdev=2573.41 00:10:34.520 lat (usec): min=7283, max=22264, avg=11026.17, stdev=2548.40 00:10:34.520 clat percentiles (usec): 00:10:34.520 | 1.00th=[ 6915], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:10:34.520 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:10:34.520 | 70.00th=[10552], 80.00th=[11076], 90.00th=[14877], 95.00th=[16581], 00:10:34.520 | 99.00th=[21890], 99.50th=[21890], 99.90th=[22152], 99.95th=[22152], 00:10:34.520 | 99.99th=[22152] 00:10:34.520 bw ( KiB/s): min=20521, max=24576, per=47.69%, avg=22548.50, stdev=2867.32, samples=2 00:10:34.520 iops : min= 5130, max= 6144, avg=5637.00, stdev=717.01, samples=2 00:10:34.520 lat (msec) : 2=0.01%, 10=24.22%, 20=74.42%, 50=1.36% 00:10:34.520 cpu : usr=4.88%, sys=15.02%, ctx=247, majf=0, minf=9 00:10:34.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:34.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.520 issued rwts: total=5497,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.520 job1: (groupid=0, jobs=1): err= 0: pid=68867: Mon Jul 15 22:36:52 2024 00:10:34.520 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:34.520 slat (usec): min=10, max=13044, avg=195.99, stdev=935.34 00:10:34.520 clat (usec): min=16869, max=46511, avg=25426.22, stdev=5756.91 00:10:34.520 lat (usec): min=16895, max=46547, avg=25622.21, stdev=5834.31 00:10:34.520 clat percentiles (usec): 00:10:34.520 | 1.00th=[17433], 5.00th=[18220], 10.00th=[18744], 20.00th=[20317], 00:10:34.520 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23725], 60.00th=[25035], 00:10:34.520 | 70.00th=[27395], 80.00th=[31589], 90.00th=[35390], 95.00th=[36439], 00:10:34.520 | 99.00th=[38536], 99.50th=[41157], 99.90th=[44303], 99.95th=[45876], 00:10:34.520 | 99.99th=[46400] 00:10:34.520 write: IOPS=2807, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1004msec); 0 zone resets 00:10:34.520 slat (usec): min=10, max=20994, avg=167.90, stdev=1048.64 00:10:34.520 clat (usec): min=1681, max=64890, avg=21620.66, stdev=9344.82 00:10:34.520 lat (usec): min=7688, 
max=64944, avg=21788.57, stdev=9434.61 00:10:34.520 clat percentiles (usec): 00:10:34.520 | 1.00th=[ 8586], 5.00th=[12518], 10.00th=[13566], 20.00th=[14615], 00:10:34.520 | 30.00th=[15926], 40.00th=[17433], 50.00th=[18482], 60.00th=[20317], 00:10:34.520 | 70.00th=[22938], 80.00th=[25035], 90.00th=[41157], 95.00th=[41681], 00:10:34.520 | 99.00th=[51119], 99.50th=[51119], 99.90th=[54789], 99.95th=[55837], 00:10:34.520 | 99.99th=[64750] 00:10:34.520 bw ( KiB/s): min= 8904, max=12624, per=22.77%, avg=10764.00, stdev=2630.44, samples=2 00:10:34.520 iops : min= 2226, max= 3156, avg=2691.00, stdev=657.61, samples=2 00:10:34.520 lat (msec) : 2=0.02%, 10=0.78%, 20=38.85%, 50=59.45%, 100=0.89% 00:10:34.520 cpu : usr=2.99%, sys=8.67%, ctx=192, majf=0, minf=9 00:10:34.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:34.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.520 issued rwts: total=2560,2819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.520 job2: (groupid=0, jobs=1): err= 0: pid=68868: Mon Jul 15 22:36:52 2024 00:10:34.520 read: IOPS=1405, BW=5621KiB/s (5756kB/s)(5672KiB/1009msec) 00:10:34.520 slat (usec): min=11, max=31420, avg=389.60, stdev=1796.74 00:10:34.520 clat (usec): min=7414, max=77019, avg=48915.04, stdev=12503.23 00:10:34.520 lat (usec): min=14620, max=77036, avg=49304.64, stdev=12529.56 00:10:34.520 clat percentiles (usec): 00:10:34.520 | 1.00th=[24511], 5.00th=[33162], 10.00th=[33817], 20.00th=[34866], 00:10:34.520 | 30.00th=[41157], 40.00th=[44303], 50.00th=[49021], 60.00th=[52167], 00:10:34.520 | 70.00th=[55313], 80.00th=[61604], 90.00th=[65274], 95.00th=[69731], 00:10:34.520 | 99.00th=[74974], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:10:34.520 | 99.99th=[77071] 00:10:34.520 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:10:34.520 slat (usec): min=7, max=22931, avg=281.99, stdev=1457.56 00:10:34.520 clat (usec): min=15081, max=82275, avg=38077.08, stdev=12208.25 00:10:34.520 lat (usec): min=21246, max=82301, avg=38359.07, stdev=12221.58 00:10:34.520 clat percentiles (usec): 00:10:34.520 | 1.00th=[22414], 5.00th=[25822], 10.00th=[26346], 20.00th=[27395], 00:10:34.520 | 30.00th=[29230], 40.00th=[30802], 50.00th=[36439], 60.00th=[39060], 00:10:34.520 | 70.00th=[42206], 80.00th=[47973], 90.00th=[52167], 95.00th=[58459], 00:10:34.520 | 99.00th=[78119], 99.50th=[80217], 99.90th=[81265], 99.95th=[82314], 00:10:34.520 | 99.99th=[82314] 00:10:34.520 bw ( KiB/s): min= 4096, max= 8208, per=13.01%, avg=6152.00, stdev=2907.62, samples=2 00:10:34.520 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 00:10:34.520 lat (msec) : 10=0.03%, 20=0.34%, 50=68.79%, 100=30.84% 00:10:34.520 cpu : usr=1.79%, sys=5.26%, ctx=331, majf=0, minf=13 00:10:34.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:10:34.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.520 issued rwts: total=1418,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.521 job3: (groupid=0, jobs=1): err= 0: pid=68869: Mon Jul 15 22:36:52 2024 00:10:34.521 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:10:34.521 slat (usec): 
min=4, max=29346, avg=338.44, stdev=1678.02 00:10:34.521 clat (usec): min=3948, max=83342, avg=43537.15, stdev=19478.67 00:10:34.521 lat (usec): min=4003, max=83388, avg=43875.59, stdev=19597.79 00:10:34.521 clat percentiles (usec): 00:10:34.521 | 1.00th=[11600], 5.00th=[17957], 10.00th=[19006], 20.00th=[20317], 00:10:34.521 | 30.00th=[21627], 40.00th=[43254], 50.00th=[47973], 60.00th=[52167], 00:10:34.521 | 70.00th=[55837], 80.00th=[58459], 90.00th=[69731], 95.00th=[70779], 00:10:34.521 | 99.00th=[81265], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:10:34.521 | 99.99th=[83362] 00:10:34.521 write: IOPS=1921, BW=7687KiB/s (7871kB/s)(7756KiB/1009msec); 0 zone resets 00:10:34.521 slat (usec): min=5, max=15618, avg=238.94, stdev=1030.44 00:10:34.521 clat (usec): min=3667, max=76961, avg=31442.53, stdev=15533.23 00:10:34.521 lat (usec): min=3684, max=77011, avg=31681.47, stdev=15605.80 00:10:34.521 clat percentiles (usec): 00:10:34.521 | 1.00th=[ 4817], 5.00th=[14353], 10.00th=[18482], 20.00th=[19268], 00:10:34.521 | 30.00th=[20055], 40.00th=[20841], 50.00th=[21890], 60.00th=[31327], 00:10:34.521 | 70.00th=[41157], 80.00th=[48497], 90.00th=[53740], 95.00th=[59507], 00:10:34.521 | 99.00th=[69731], 99.50th=[70779], 99.90th=[70779], 99.95th=[77071], 00:10:34.521 | 99.99th=[77071] 00:10:34.521 bw ( KiB/s): min= 5632, max= 8873, per=15.34%, avg=7252.50, stdev=2291.73, samples=2 00:10:34.521 iops : min= 1408, max= 2218, avg=1813.00, stdev=572.76, samples=2 00:10:34.521 lat (msec) : 4=0.43%, 10=1.47%, 20=23.34%, 50=45.24%, 100=29.53% 00:10:34.521 cpu : usr=2.18%, sys=5.56%, ctx=359, majf=0, minf=21 00:10:34.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:10:34.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.521 issued rwts: total=1536,1939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.521 00:10:34.521 Run status group 0 (all jobs): 00:10:34.521 READ: bw=42.6MiB/s (44.7MB/s), 5621KiB/s-21.3MiB/s (5756kB/s-22.4MB/s), io=43.0MiB (45.1MB), run=1004-1009msec 00:10:34.521 WRITE: bw=46.2MiB/s (48.4MB/s), 6089KiB/s-21.9MiB/s (6235kB/s-22.9MB/s), io=46.6MiB (48.8MB), run=1004-1009msec 00:10:34.521 00:10:34.521 Disk stats (read/write): 00:10:34.521 nvme0n1: ios=4648/4680, merge=0/0, ticks=52741/47619, in_queue=100360, util=88.06% 00:10:34.521 nvme0n2: ios=2097/2470, merge=0/0, ticks=17981/15919, in_queue=33900, util=88.56% 00:10:34.521 nvme0n3: ios=1061/1536, merge=0/0, ticks=33554/33851, in_queue=67405, util=90.20% 00:10:34.521 nvme0n4: ios=1494/1536, merge=0/0, ticks=36209/29988, in_queue=66197, util=88.14% 00:10:34.521 22:36:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:34.521 22:36:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68882 00:10:34.521 22:36:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:34.521 22:36:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:34.521 [global] 00:10:34.521 thread=1 00:10:34.521 invalidate=1 00:10:34.521 rw=read 00:10:34.521 time_based=1 00:10:34.521 runtime=10 00:10:34.521 ioengine=libaio 00:10:34.521 direct=1 00:10:34.521 bs=4096 00:10:34.521 iodepth=1 00:10:34.521 norandommap=1 00:10:34.521 numjobs=1 00:10:34.521 00:10:34.521 [job0] 00:10:34.521 filename=/dev/nvme0n1 00:10:34.521 [job1] 00:10:34.521 
filename=/dev/nvme0n2 00:10:34.521 [job2] 00:10:34.521 filename=/dev/nvme0n3 00:10:34.521 [job3] 00:10:34.521 filename=/dev/nvme0n4 00:10:34.521 Could not set queue depth (nvme0n1) 00:10:34.521 Could not set queue depth (nvme0n2) 00:10:34.521 Could not set queue depth (nvme0n3) 00:10:34.521 Could not set queue depth (nvme0n4) 00:10:34.521 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.521 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.521 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.521 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.521 fio-3.35 00:10:34.521 Starting 4 threads 00:10:37.803 22:36:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:37.803 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35479552, buflen=4096 00:10:37.803 fio: pid=68935, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:37.804 22:36:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:38.061 fio: pid=68934, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.061 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=44154880, buflen=4096 00:10:38.061 22:36:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.061 22:36:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:38.328 fio: pid=68932, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.328 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=40685568, buflen=4096 00:10:38.328 22:36:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.328 22:36:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:38.587 fio: pid=68933, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.587 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=52711424, buflen=4096 00:10:38.587 00:10:38.587 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68932: Mon Jul 15 22:36:56 2024 00:10:38.587 read: IOPS=2887, BW=11.3MiB/s (11.8MB/s)(38.8MiB/3440msec) 00:10:38.587 slat (usec): min=7, max=9509, avg=19.02, stdev=164.55 00:10:38.587 clat (usec): min=147, max=7492, avg=325.78, stdev=133.72 00:10:38.587 lat (usec): min=160, max=9806, avg=344.80, stdev=213.17 00:10:38.587 clat percentiles (usec): 00:10:38.587 | 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 249], 20.00th=[ 273], 00:10:38.587 | 30.00th=[ 289], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 338], 00:10:38.587 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 412], 00:10:38.587 | 99.00th=[ 461], 99.50th=[ 519], 99.90th=[ 1221], 99.95th=[ 3523], 00:10:38.587 | 99.99th=[ 7504] 00:10:38.587 bw ( KiB/s): min= 9704, max=11920, per=24.44%, avg=11126.67, stdev=944.45, samples=6 00:10:38.587 iops : min= 2426, max= 2980, avg=2781.67, stdev=236.11, samples=6 00:10:38.587 lat (usec) : 250=10.60%, 500=88.82%, 750=0.46% 
00:10:38.587 lat (msec) : 2=0.03%, 4=0.05%, 10=0.03% 00:10:38.587 cpu : usr=0.93%, sys=4.13%, ctx=9950, majf=0, minf=1 00:10:38.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 issued rwts: total=9934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.587 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68933: Mon Jul 15 22:36:56 2024 00:10:38.587 read: IOPS=3467, BW=13.5MiB/s (14.2MB/s)(50.3MiB/3712msec) 00:10:38.587 slat (usec): min=7, max=9714, avg=17.91, stdev=159.35 00:10:38.587 clat (usec): min=127, max=36538, avg=269.02, stdev=328.66 00:10:38.587 lat (usec): min=140, max=36570, avg=286.93, stdev=366.57 00:10:38.587 clat percentiles (usec): 00:10:38.587 | 1.00th=[ 149], 5.00th=[ 172], 10.00th=[ 190], 20.00th=[ 208], 00:10:38.587 | 30.00th=[ 223], 40.00th=[ 237], 50.00th=[ 260], 60.00th=[ 281], 00:10:38.587 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 375], 00:10:38.587 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 545], 99.95th=[ 742], 00:10:38.587 | 99.99th=[ 3589] 00:10:38.587 bw ( KiB/s): min=11584, max=16872, per=30.04%, avg=13674.43, stdev=2359.74, samples=7 00:10:38.587 iops : min= 2896, max= 4218, avg=3418.57, stdev=589.91, samples=7 00:10:38.587 lat (usec) : 250=46.27%, 500=53.57%, 750=0.11% 00:10:38.587 lat (msec) : 2=0.02%, 4=0.02%, 50=0.01% 00:10:38.587 cpu : usr=0.94%, sys=4.69%, ctx=12889, majf=0, minf=1 00:10:38.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 issued rwts: total=12870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.587 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68934: Mon Jul 15 22:36:56 2024 00:10:38.587 read: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(42.1MiB/3211msec) 00:10:38.587 slat (usec): min=11, max=7765, avg=20.26, stdev=95.59 00:10:38.587 clat (usec): min=155, max=2626, avg=275.69, stdev=81.01 00:10:38.587 lat (usec): min=177, max=8007, avg=295.94, stdev=125.55 00:10:38.587 clat percentiles (usec): 00:10:38.587 | 1.00th=[ 178], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 223], 00:10:38.587 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 260], 60.00th=[ 277], 00:10:38.587 | 70.00th=[ 306], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 379], 00:10:38.587 | 99.00th=[ 429], 99.50th=[ 478], 99.90th=[ 1090], 99.95th=[ 1319], 00:10:38.587 | 99.99th=[ 2474] 00:10:38.587 bw ( KiB/s): min=11800, max=15752, per=28.93%, avg=13170.67, stdev=1637.54, samples=6 00:10:38.587 iops : min= 2950, max= 3938, avg=3292.67, stdev=409.39, samples=6 00:10:38.587 lat (usec) : 250=43.87%, 500=55.68%, 750=0.19%, 1000=0.15% 00:10:38.587 lat (msec) : 2=0.06%, 4=0.05% 00:10:38.587 cpu : usr=1.18%, sys=5.55%, ctx=10788, majf=0, minf=1 00:10:38.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 issued rwts: 
total=10781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.587 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68935: Mon Jul 15 22:36:56 2024 00:10:38.587 read: IOPS=2958, BW=11.6MiB/s (12.1MB/s)(33.8MiB/2928msec) 00:10:38.587 slat (usec): min=11, max=111, avg=20.20, stdev= 5.97 00:10:38.587 clat (usec): min=166, max=2631, avg=315.33, stdev=71.18 00:10:38.587 lat (usec): min=180, max=2656, avg=335.54, stdev=72.85 00:10:38.587 clat percentiles (usec): 00:10:38.587 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 249], 00:10:38.587 | 30.00th=[ 273], 40.00th=[ 302], 50.00th=[ 326], 60.00th=[ 338], 00:10:38.587 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:10:38.587 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 562], 99.95th=[ 750], 00:10:38.587 | 99.99th=[ 2638] 00:10:38.587 bw ( KiB/s): min=10200, max=13688, per=25.97%, avg=11820.80, stdev=1537.01, samples=5 00:10:38.587 iops : min= 2550, max= 3422, avg=2955.20, stdev=384.25, samples=5 00:10:38.587 lat (usec) : 250=21.06%, 500=78.71%, 750=0.17%, 1000=0.02% 00:10:38.587 lat (msec) : 4=0.02% 00:10:38.587 cpu : usr=1.16%, sys=5.60%, ctx=8665, majf=0, minf=1 00:10:38.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.587 issued rwts: total=8663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.587 00:10:38.587 Run status group 0 (all jobs): 00:10:38.587 READ: bw=44.5MiB/s (46.6MB/s), 11.3MiB/s-13.5MiB/s (11.8MB/s-14.2MB/s), io=165MiB (173MB), run=2928-3712msec 00:10:38.587 00:10:38.587 Disk stats (read/write): 00:10:38.587 nvme0n1: ios=9678/0, merge=0/0, ticks=3092/0, in_queue=3092, util=95.39% 00:10:38.587 nvme0n2: ios=12403/0, merge=0/0, ticks=3342/0, in_queue=3342, util=95.91% 00:10:38.587 nvme0n3: ios=10398/0, merge=0/0, ticks=2943/0, in_queue=2943, util=96.49% 00:10:38.587 nvme0n4: ios=8512/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.77% 00:10:38.587 22:36:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.587 22:36:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:38.845 22:36:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.845 22:36:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:39.103 22:36:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.103 22:36:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:39.361 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.361 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:39.619 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
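The deletions traced here land while the background read job started at fio.sh@58 (fio_pid=68882, 10-second runtime) still holds its /dev/nvme0nX files open, which is what produces the "Remote I/O error" lines above: the namespaces vanish underneath fio, and the test later treats the non-zero fio exit status as success ("fio failed as expected"). A minimal sketch of that hotplug pattern, using only RPC calls that appear in this log; the direct fio invocation stands in for scripts/fio-wrapper and the single filename is illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Background read job against an exported namespace (10 s, time based).
fio --name=hotplug --ioengine=libaio --direct=1 --rw=read --bs=4096 \
    --runtime=10 --time_based --filename=/dev/nvme0n1 &
fio_pid=$!
sleep 3                              # let the job ramp up, as in fio.sh@61
$rpc bdev_raid_delete concat0        # RAID volumes first (fio.sh@63-64)
$rpc bdev_raid_delete raid0
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $rpc bdev_malloc_delete "$b"       # then every malloc bdev (fio.sh@65-66)
done
wait "$fio_pid" || echo 'fio failed as expected (Remote I/O error)'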
00:10:39.619 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68882 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.878 nvmf hotplug test: fio failed as expected 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:39.878 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.446 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:40.446 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:40.446 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:40.446 22:36:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.446 rmmod nvme_tcp 00:10:40.446 rmmod nvme_fabrics 00:10:40.446 rmmod nvme_keyring 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68501 ']' 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68501 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68501 ']' 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68501 
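Both the connect wait at fio.sh@48 and the disconnect wait traced just above key off the controller serial rather than fixed device names: the helper polls lsblk -l -o NAME,SERIAL until the number of lines carrying SPDKISFASTANDAWESOME reaches the expected namespace count after connect, and until the serial disappears again after disconnect. A rough reconstruction of the connect-side polling loop, inferred from the autotest_common.sh xtrace; the function body is an illustrative sketch, not the exact upstream source:

# Wait until <expected> namespaces with the given serial show up (sketch).
waitforserial() {
  local serial=$1 expected=${2:-1} i=0 found=0
  while (( i++ <= 15 )); do                       # retry limit as traced
    sleep 2                                       # poll interval as traced
    found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( found == expected )) && return 0           # e.g. 4 for cnode1 above
  done
  return 1
}
waitforserial SPDKISFASTANDAWESOME 4              # usage mirroring fio.sh@48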
00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68501 00:10:40.446 killing process with pid 68501 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68501' 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68501 00:10:40.446 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68501 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:40.705 00:10:40.705 real 0m20.033s 00:10:40.705 user 1m15.876s 00:10:40.705 sys 0m9.759s 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.705 22:36:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.705 ************************************ 00:10:40.705 END TEST nvmf_fio_target 00:10:40.705 ************************************ 00:10:40.705 22:36:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.705 22:36:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:40.705 22:36:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.705 22:36:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.705 22:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.705 ************************************ 00:10:40.705 START TEST nvmf_bdevio 00:10:40.705 ************************************ 00:10:40.705 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:40.964 * Looking for test storage... 
00:10:40.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.964 22:36:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.964 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.965 22:36:58 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.965 Cannot find device "nvmf_tgt_br" 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.965 Cannot find device "nvmf_tgt_br2" 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.965 Cannot find device "nvmf_tgt_br" 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.965 Cannot find device "nvmf_tgt_br2" 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.965 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:41.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:10:41.224 00:10:41.224 --- 10.0.0.2 ping statistics --- 00:10:41.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.224 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:41.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:10:41.224 00:10:41.224 --- 10.0.0.3 ping statistics --- 00:10:41.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.224 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:41.224 00:10:41.224 --- 10.0.0.1 ping statistics --- 00:10:41.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.224 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.224 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69197 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69197 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69197 ']' 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.225 22:36:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.225 [2024-07-15 22:36:59.023058] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:10:41.225 [2024-07-15 22:36:59.023134] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.484 [2024-07-15 22:36:59.160106] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.484 [2024-07-15 22:36:59.312622] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.484 [2024-07-15 22:36:59.312678] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
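Everything from nvmf_veth_init through the three pings above is network plumbing: the target runs inside the nvmf_tgt_ns_spdk namespace and reaches the host-side initiator over veth pairs joined by a bridge, with an iptables rule opening TCP port 4420. Stripped of the error-tolerant teardown and of the second target interface (nvmf_tgt_if2/10.0.0.3, which follows the same pattern), the topology built here reduces to roughly this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side...
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # ...moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # host -> target namespace, as verified above

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0x78), so its listener lives on 10.0.0.2 while its /var/tmp/spdk.sock RPC socket remains reachable from the host.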
00:10:41.484 [2024-07-15 22:36:59.312692] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.484 [2024-07-15 22:36:59.312703] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.484 [2024-07-15 22:36:59.312713] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.484 [2024-07-15 22:36:59.312849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:41.484 [2024-07-15 22:36:59.313024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:41.484 [2024-07-15 22:36:59.313609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:41.484 [2024-07-15 22:36:59.313620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.742 [2024-07-15 22:36:59.395420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:42.311 22:36:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.311 22:36:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:42.311 22:36:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.311 22:36:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.311 22:36:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.311 [2024-07-15 22:37:00.044089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.311 Malloc0 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.311 [2024-07-15 22:37:00.122132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:42.311 { 00:10:42.311 "params": { 00:10:42.311 "name": "Nvme$subsystem", 00:10:42.311 "trtype": "$TEST_TRANSPORT", 00:10:42.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.311 "adrfam": "ipv4", 00:10:42.311 "trsvcid": "$NVMF_PORT", 00:10:42.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.311 "hdgst": ${hdgst:-false}, 00:10:42.311 "ddgst": ${ddgst:-false} 00:10:42.311 }, 00:10:42.311 "method": "bdev_nvme_attach_controller" 00:10:42.311 } 00:10:42.311 EOF 00:10:42.311 )") 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:42.311 22:37:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:42.311 "params": { 00:10:42.311 "name": "Nvme1", 00:10:42.311 "trtype": "tcp", 00:10:42.311 "traddr": "10.0.0.2", 00:10:42.311 "adrfam": "ipv4", 00:10:42.311 "trsvcid": "4420", 00:10:42.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.311 "hdgst": false, 00:10:42.311 "ddgst": false 00:10:42.311 }, 00:10:42.311 "method": "bdev_nvme_attach_controller" 00:10:42.311 }' 00:10:42.570 [2024-07-15 22:37:00.189047] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
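By this point the target side has been assembled entirely over RPC, and the initiator side is just the JSON printed by gen_nvmf_target_json above, which bdevio consumes via --json /dev/fd/62 to attach controller Nvme1 at 10.0.0.2:4420. Written as direct rpc.py calls (arguments are verbatim from the trace; rpc_cmd is the autotest wrapper that forwards to scripts/rpc.py), the target setup is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192                   # '-t tcp -o' from $NVMF_TRANSPORT_OPTS, plus 8 KiB in-capsule data
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB ramdisk with 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420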
00:10:42.570 [2024-07-15 22:37:00.189163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69237 ] 00:10:42.570 [2024-07-15 22:37:00.336238] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.828 [2024-07-15 22:37:00.493481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.829 [2024-07-15 22:37:00.493639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.829 [2024-07-15 22:37:00.493650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.829 [2024-07-15 22:37:00.588788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:43.189 I/O targets: 00:10:43.189 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:43.189 00:10:43.189 00:10:43.189 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.189 http://cunit.sourceforge.net/ 00:10:43.189 00:10:43.189 00:10:43.189 Suite: bdevio tests on: Nvme1n1 00:10:43.189 Test: blockdev write read block ...passed 00:10:43.189 Test: blockdev write zeroes read block ...passed 00:10:43.189 Test: blockdev write zeroes read no split ...passed 00:10:43.189 Test: blockdev write zeroes read split ...passed 00:10:43.189 Test: blockdev write zeroes read split partial ...passed 00:10:43.189 Test: blockdev reset ...[2024-07-15 22:37:00.758015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:43.189 [2024-07-15 22:37:00.758153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215b730 (9): Bad file descriptor 00:10:43.189 [2024-07-15 22:37:00.772708] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:43.189 passed 00:10:43.189 Test: blockdev write read 8 blocks ...passed 00:10:43.189 Test: blockdev write read size > 128k ...passed 00:10:43.189 Test: blockdev write read invalid size ...passed 00:10:43.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:43.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:43.189 Test: blockdev write read max offset ...passed 00:10:43.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:43.189 Test: blockdev writev readv 8 blocks ...passed 00:10:43.189 Test: blockdev writev readv 30 x 1block ...passed 00:10:43.189 Test: blockdev writev readv block ...passed 00:10:43.189 Test: blockdev writev readv size > 128k ...passed 00:10:43.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:43.189 Test: blockdev comparev and writev ...[2024-07-15 22:37:00.780549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.780597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.780617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.780628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.781088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.781112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.781129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.781140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.781560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.781604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.781616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.782075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.782101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.782118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.189 [2024-07-15 22:37:00.782129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:43.189 passed 00:10:43.189 Test: blockdev nvme passthru rw ...passed 00:10:43.189 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:37:00.783005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.189 [2024-07-15 22:37:00.783031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:43.189 [2024-07-15 22:37:00.783174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.189 [2024-07-15 22:37:00.783196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:43.190 [2024-07-15 22:37:00.783325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.190 [2024-07-15 22:37:00.783341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:43.190 [2024-07-15 22:37:00.783449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.190 [2024-07-15 22:37:00.783464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:43.190 passed 00:10:43.190 Test: blockdev nvme admin passthru ...passed 00:10:43.190 Test: blockdev copy ...passed 00:10:43.190 00:10:43.190 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.190 suites 1 1 n/a 0 0 00:10:43.190 tests 23 23 23 0 0 00:10:43.190 asserts 152 152 152 0 n/a 00:10:43.190 00:10:43.190 Elapsed time = 0.156 seconds 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.448 rmmod nvme_tcp 00:10:43.448 rmmod nvme_fabrics 00:10:43.448 rmmod nvme_keyring 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69197 ']' 00:10:43.448 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69197 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69197 ']' 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69197 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69197 00:10:43.449 killing process with pid 69197 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69197' 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69197 00:10:43.449 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69197 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:44.016 00:10:44.016 real 0m3.144s 00:10:44.016 user 0m10.556s 00:10:44.016 sys 0m0.924s 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.016 22:37:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.016 ************************************ 00:10:44.016 END TEST nvmf_bdevio 00:10:44.016 ************************************ 00:10:44.016 22:37:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:44.016 22:37:01 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:44.016 22:37:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.016 22:37:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.016 22:37:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.016 ************************************ 00:10:44.016 START TEST nvmf_auth_target 00:10:44.016 ************************************ 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:44.016 * Looking for test storage... 
00:10:44.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.016 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:44.017 Cannot find device "nvmf_tgt_br" 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.017 Cannot find device "nvmf_tgt_br2" 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:44.017 Cannot find device "nvmf_tgt_br" 00:10:44.017 
22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:44.017 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:44.276 Cannot find device "nvmf_tgt_br2" 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.276 22:37:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.276 22:37:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:44.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:10:44.276 00:10:44.276 --- 10.0.0.2 ping statistics --- 00:10:44.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.276 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:10:44.276 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:44.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:10:44.535 00:10:44.535 --- 10.0.0.3 ping statistics --- 00:10:44.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.535 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:44.535 00:10:44.535 --- 10.0.0.1 ping statistics --- 00:10:44.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.535 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69413 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69413 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69413 ']' 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.535 22:37:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.535 22:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69445 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:45.471 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=03c2f7e642c94e206cf0c62bd0df8e3df6b97211a659e779 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uSM 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 03c2f7e642c94e206cf0c62bd0df8e3df6b97211a659e779 0 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 03c2f7e642c94e206cf0c62bd0df8e3df6b97211a659e779 0 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=03c2f7e642c94e206cf0c62bd0df8e3df6b97211a659e779 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uSM 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uSM 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uSM 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f7e2b21b8e73f36512b960346962af78a056796a44caad4aca44421da3c5473f 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZNv 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f7e2b21b8e73f36512b960346962af78a056796a44caad4aca44421da3c5473f 3 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f7e2b21b8e73f36512b960346962af78a056796a44caad4aca44421da3c5473f 3 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f7e2b21b8e73f36512b960346962af78a056796a44caad4aca44421da3c5473f 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZNv 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZNv 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ZNv 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb99c702105bc69a6b465d3e381090d1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2UY 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb99c702105bc69a6b465d3e381090d1 1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb99c702105bc69a6b465d3e381090d1 1 
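Each gen_dhchap_key call traced here maps the requested digest to an index (null=0, sha256=1, sha384=2, sha512=3), then pulls len/2 random bytes from /dev/urandom with xxd to get a len-character hex string that becomes the secret material. A minimal sketch of that first half of the helper, reusing the exact xxd invocation from the trace (running it on its own like this is an assumption):

  # digest -> DHHC index mapping and random secret material, as in the traced gen_dhchap_key
  declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  digest=sha512
  len=64                                           # secret length in hex characters
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes -> len hex characters
  echo "digest index ${digests[$digest]}, secret material: $key"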
00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb99c702105bc69a6b465d3e381090d1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2UY 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2UY 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.2UY 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6bd273953a70f27e8532d6bbe13f0fc243899367275747e7 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OfK 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6bd273953a70f27e8532d6bbe13f0fc243899367275747e7 2 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6bd273953a70f27e8532d6bbe13f0fc243899367275747e7 2 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6bd273953a70f27e8532d6bbe13f0fc243899367275747e7 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:45.729 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OfK 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OfK 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.OfK 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:45.988 
22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8bbacbb7f85eb215e0aeb504e829e22e30b00877ba0f51ab 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bdA 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8bbacbb7f85eb215e0aeb504e829e22e30b00877ba0f51ab 2 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8bbacbb7f85eb215e0aeb504e829e22e30b00877ba0f51ab 2 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8bbacbb7f85eb215e0aeb504e829e22e30b00877ba0f51ab 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bdA 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bdA 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.bdA 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5a5f218d6254d194618d74bb43d811e8 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OPl 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5a5f218d6254d194618d74bb43d811e8 1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5a5f218d6254d194618d74bb43d811e8 1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5a5f218d6254d194618d74bb43d811e8 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OPl 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OPl 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.OPl 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=01b577f1c1a3b175227c0a1f8513b32a90254768de04e035db5bc5ec1a835fbb 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1h1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 01b577f1c1a3b175227c0a1f8513b32a90254768de04e035db5bc5ec1a835fbb 3 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 01b577f1c1a3b175227c0a1f8513b32a90254768de04e035db5bc5ec1a835fbb 3 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=01b577f1c1a3b175227c0a1f8513b32a90254768de04e035db5bc5ec1a835fbb 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1h1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1h1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.1h1 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69413 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69413 ']' 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
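The format_dhchap_key/format_key step then wraps that hex string into the DHHC-1 representation through an inline `python -` snippet whose body is not shown in the trace. The sketch below is a guess at a compatible implementation, written with python3 -c instead of the stdin form the harness uses: judging from the DHHC-1:00:...== secrets passed to nvme connect further down, the hex characters themselves are the secret bytes and a little-endian CRC-32 of them is appended before base64 encoding; the mktemp/chmod handling is taken from the trace as-is.

  # hedged reconstruction of the DHHC-1 wrapping; the CRC-32 suffix is inferred, not shown in the trace
  key=03c2f7e642c94e206cf0c62bd0df8e3df6b97211a659e779   # hex string produced above
  digest_idx=0                                           # 0=null, 1=sha256, 2=sha384, 3=sha512
  secret=$(python3 -c 'import sys, base64, struct, zlib; raw = sys.argv[1].encode(); crc = struct.pack("<I", zlib.crc32(raw) & 0xffffffff); print(base64.b64encode(raw + crc).decode())' "$key")
  file=$(mktemp -t spdk.key-null.XXX)
  echo "DHHC-1:0${digest_idx}:${secret}:" > "$file"      # e.g. DHHC-1:00:MDNj...uw==: as used by nvme connect below
  chmod 0600 "$file"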
00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.988 22:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.555 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.555 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:46.555 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69445 /var/tmp/host.sock 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69445 ']' 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.556 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uSM 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uSM 00:10:46.815 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uSM 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ZNv ]] 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZNv 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZNv 00:10:47.085 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.ZNv 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2UY 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2UY 00:10:47.386 22:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2UY 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.OfK ]] 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OfK 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OfK 00:10:47.386 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OfK 00:10:47.649 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:47.649 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bdA 00:10:47.649 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.649 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.908 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.908 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bdA 00:10:47.909 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bdA 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.OPl ]] 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OPl 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OPl 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OPl 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:48.167 
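Every generated file is registered twice: once with the nvmf target through rpc_cmd and once with the host-side spdk_tgt through the hostrpc wrapper on /var/tmp/host.sock, so key0/ckey0, key1/ckey1 and so on exist in both keyrings under the same names. One iteration of that loop, condensed from the trace (the hostrpc expansion to rpc.py is shown verbatim above; treating rpc_cmd as the same rpc.py against the target's default /var/tmp/spdk.sock is an assumption):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target-side keyring (default RPC socket)
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.uSM
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZNv
  # host-side keyring (spdk_tgt listening on /var/tmp/host.sock)
  $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.uSM
  $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZNv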
22:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1h1 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.167 22:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1h1 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1h1 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:48.425 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.990 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.248 00:10:49.248 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.248 22:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.248 22:37:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.507 { 00:10:49.507 "cntlid": 1, 00:10:49.507 "qid": 0, 00:10:49.507 "state": "enabled", 00:10:49.507 "thread": "nvmf_tgt_poll_group_000", 00:10:49.507 "listen_address": { 00:10:49.507 "trtype": "TCP", 00:10:49.507 "adrfam": "IPv4", 00:10:49.507 "traddr": "10.0.0.2", 00:10:49.507 "trsvcid": "4420" 00:10:49.507 }, 00:10:49.507 "peer_address": { 00:10:49.507 "trtype": "TCP", 00:10:49.507 "adrfam": "IPv4", 00:10:49.507 "traddr": "10.0.0.1", 00:10:49.507 "trsvcid": "57992" 00:10:49.507 }, 00:10:49.507 "auth": { 00:10:49.507 "state": "completed", 00:10:49.507 "digest": "sha256", 00:10:49.507 "dhgroup": "null" 00:10:49.507 } 00:10:49.507 } 00:10:49.507 ]' 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.507 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.765 22:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.033 22:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.033 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.033 { 00:10:55.033 "cntlid": 3, 00:10:55.033 "qid": 0, 00:10:55.033 "state": "enabled", 00:10:55.033 "thread": "nvmf_tgt_poll_group_000", 00:10:55.033 "listen_address": { 00:10:55.033 "trtype": "TCP", 00:10:55.033 "adrfam": "IPv4", 00:10:55.033 "traddr": "10.0.0.2", 00:10:55.033 "trsvcid": "4420" 00:10:55.033 }, 00:10:55.033 "peer_address": { 00:10:55.033 "trtype": "TCP", 00:10:55.033 
"adrfam": "IPv4", 00:10:55.033 "traddr": "10.0.0.1", 00:10:55.033 "trsvcid": "58012" 00:10:55.033 }, 00:10:55.033 "auth": { 00:10:55.033 "state": "completed", 00:10:55.033 "digest": "sha256", 00:10:55.033 "dhgroup": "null" 00:10:55.033 } 00:10:55.033 } 00:10:55.033 ]' 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.033 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.315 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.315 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:55.315 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.315 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.315 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.315 22:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.583 22:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:56.149 22:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.408 22:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.666 22:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.666 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.666 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.925 00:10:56.925 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.925 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.925 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.184 { 00:10:57.184 "cntlid": 5, 00:10:57.184 "qid": 0, 00:10:57.184 "state": "enabled", 00:10:57.184 "thread": "nvmf_tgt_poll_group_000", 00:10:57.184 "listen_address": { 00:10:57.184 "trtype": "TCP", 00:10:57.184 "adrfam": "IPv4", 00:10:57.184 "traddr": "10.0.0.2", 00:10:57.184 "trsvcid": "4420" 00:10:57.184 }, 00:10:57.184 "peer_address": { 00:10:57.184 "trtype": "TCP", 00:10:57.184 "adrfam": "IPv4", 00:10:57.184 "traddr": "10.0.0.1", 00:10:57.184 "trsvcid": "58048" 00:10:57.184 }, 00:10:57.184 "auth": { 00:10:57.184 "state": "completed", 00:10:57.184 "digest": "sha256", 00:10:57.184 "dhgroup": "null" 00:10:57.184 } 00:10:57.184 } 00:10:57.184 ]' 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:57.184 22:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.184 22:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.184 22:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.184 22:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.443 22:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:10:58.379 22:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:58.379 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.636 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.895 00:10:58.895 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.895 22:37:16 
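Each connect_authenticate pass follows the same pattern seen here: restrict the host's DH-HMAC-CHAP digests and dhgroups, grant the host NQN access to nqn.2024-03.io.spdk:cnode0 with the chosen key (and controller key, when one exists), attach a bdev controller across the fabric, confirm from the target's qpair listing that authentication completed with the expected digest and dhgroup, and detach again. A condensed sketch of one pass with the addresses and NQNs from the trace, using rpc.py directly where the harness uses its rpc_cmd/hostrpc wrappers (the default-socket assumption again):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTRPC="$RPC -s /var/tmp/host.sock"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385
  # allowed auth parameters on the host side
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  # let this host authenticate to the subsystem with key0/ckey0
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach over TCP; this is where DH-HMAC-CHAP actually runs
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # target-side view of the qpair: digest, dhgroup and state should match what was negotiated
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  $HOSTRPC bdev_nvme_detach_controller nvme0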
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.895 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.518 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.518 22:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.518 { 00:10:59.518 "cntlid": 7, 00:10:59.518 "qid": 0, 00:10:59.518 "state": "enabled", 00:10:59.518 "thread": "nvmf_tgt_poll_group_000", 00:10:59.518 "listen_address": { 00:10:59.518 "trtype": "TCP", 00:10:59.518 "adrfam": "IPv4", 00:10:59.518 "traddr": "10.0.0.2", 00:10:59.518 "trsvcid": "4420" 00:10:59.518 }, 00:10:59.518 "peer_address": { 00:10:59.518 "trtype": "TCP", 00:10:59.518 "adrfam": "IPv4", 00:10:59.518 "traddr": "10.0.0.1", 00:10:59.518 "trsvcid": "37888" 00:10:59.518 }, 00:10:59.518 "auth": { 00:10:59.518 "state": "completed", 00:10:59.518 "digest": "sha256", 00:10:59.518 "dhgroup": "null" 00:10:59.518 } 00:10:59.518 } 00:10:59.518 ]' 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.518 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.792 22:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.369 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.627 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.886 00:11:00.886 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.886 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.886 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.145 { 00:11:01.145 "cntlid": 9, 00:11:01.145 "qid": 0, 00:11:01.145 "state": "enabled", 00:11:01.145 "thread": "nvmf_tgt_poll_group_000", 00:11:01.145 "listen_address": { 00:11:01.145 "trtype": "TCP", 00:11:01.145 "adrfam": "IPv4", 00:11:01.145 
"traddr": "10.0.0.2", 00:11:01.145 "trsvcid": "4420" 00:11:01.145 }, 00:11:01.145 "peer_address": { 00:11:01.145 "trtype": "TCP", 00:11:01.145 "adrfam": "IPv4", 00:11:01.145 "traddr": "10.0.0.1", 00:11:01.145 "trsvcid": "37918" 00:11:01.145 }, 00:11:01.145 "auth": { 00:11:01.145 "state": "completed", 00:11:01.145 "digest": "sha256", 00:11:01.145 "dhgroup": "ffdhe2048" 00:11:01.145 } 00:11:01.145 } 00:11:01.145 ]' 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.145 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.403 22:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.403 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.403 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.403 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.403 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.403 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.662 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:02.227 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.227 22:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:02.227 22:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.227 22:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.227 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.227 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.227 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.227 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.484 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.052 00:11:03.052 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.052 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.052 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.311 { 00:11:03.311 "cntlid": 11, 00:11:03.311 "qid": 0, 00:11:03.311 "state": "enabled", 00:11:03.311 "thread": "nvmf_tgt_poll_group_000", 00:11:03.311 "listen_address": { 00:11:03.311 "trtype": "TCP", 00:11:03.311 "adrfam": "IPv4", 00:11:03.311 "traddr": "10.0.0.2", 00:11:03.311 "trsvcid": "4420" 00:11:03.311 }, 00:11:03.311 "peer_address": { 00:11:03.311 "trtype": "TCP", 00:11:03.311 "adrfam": "IPv4", 00:11:03.311 "traddr": "10.0.0.1", 00:11:03.311 "trsvcid": "37942" 00:11:03.311 }, 00:11:03.311 "auth": { 00:11:03.311 "state": "completed", 00:11:03.311 "digest": "sha256", 00:11:03.311 "dhgroup": "ffdhe2048" 00:11:03.311 } 00:11:03.311 } 00:11:03.311 ]' 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.311 22:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.311 22:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.311 22:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.311 22:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.311 22:37:21 
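After the RPC-level pass, the same key pair is exercised through the kernel initiator: nvme connect is handed the DHHC-1 secrets directly rather than keyring names, the controller is torn down with nvme disconnect, and the host is removed from the subsystem before the next digest/dhgroup/key combination. A trimmed version of the nvme connect lines above, with the secrets abbreviated (the full DHHC-1 strings appear verbatim in the trace):

  HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "$HOSTID" \
      --dhchap-secret "DHHC-1:01:..." \
      --dhchap-ctrl-secret "DHHC-1:02:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0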
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.311 22:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.570 22:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:04.538 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.539 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.106 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.106 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.106 { 00:11:05.106 "cntlid": 13, 00:11:05.106 "qid": 0, 00:11:05.106 "state": "enabled", 00:11:05.106 "thread": "nvmf_tgt_poll_group_000", 00:11:05.106 "listen_address": { 00:11:05.106 "trtype": "TCP", 00:11:05.106 "adrfam": "IPv4", 00:11:05.106 "traddr": "10.0.0.2", 00:11:05.106 "trsvcid": "4420" 00:11:05.106 }, 00:11:05.106 "peer_address": { 00:11:05.106 "trtype": "TCP", 00:11:05.106 "adrfam": "IPv4", 00:11:05.106 "traddr": "10.0.0.1", 00:11:05.106 "trsvcid": "37968" 00:11:05.106 }, 00:11:05.106 "auth": { 00:11:05.106 "state": "completed", 00:11:05.106 "digest": "sha256", 00:11:05.106 "dhgroup": "ffdhe2048" 00:11:05.107 } 00:11:05.107 } 00:11:05.107 ]' 00:11:05.107 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.365 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.365 22:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.365 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:05.365 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.365 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.365 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.365 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.622 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:06.189 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.189 22:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 
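[editor's note] The trace above repeats one iteration of the test's connect_authenticate loop. For readability, here is a hedged reconstruction of that per-iteration flow as plain shell, built only from the RPC calls visible in the trace; the rpc.py path, socket, NQNs and key names mirror the log, while the surrounding variables and the key/ckey material (registered earlier in target/auth.sh, not shown in this excerpt) are assumptions. The hostrpc and rpc_cmd helpers seen in the trace appear to wrap rpc.py against the host's /var/tmp/host.sock and the target's default socket respectively.

    # Sketch of one connect_authenticate iteration, reconstructed from the trace.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385"
    subnqn="nqn.2024-03.io.spdk:cnode0"

    # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side: allow this host with a key (plus an optional controller key
    # for bidirectional authentication).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; the attach only succeeds if DH-HMAC-CHAP
    # completes with the parameters configured above.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

Each pass of the loop in the log repeats this with a different key index and, in the outer loop, a different ffdhe dhgroup.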
00:11:06.189 22:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.189 22:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.189 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.189 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.189 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.189 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:06.781 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.040 00:11:07.040 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.040 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.040 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.300 { 00:11:07.300 "cntlid": 15, 00:11:07.300 "qid": 0, 
00:11:07.300 "state": "enabled", 00:11:07.300 "thread": "nvmf_tgt_poll_group_000", 00:11:07.300 "listen_address": { 00:11:07.300 "trtype": "TCP", 00:11:07.300 "adrfam": "IPv4", 00:11:07.300 "traddr": "10.0.0.2", 00:11:07.300 "trsvcid": "4420" 00:11:07.300 }, 00:11:07.300 "peer_address": { 00:11:07.300 "trtype": "TCP", 00:11:07.300 "adrfam": "IPv4", 00:11:07.300 "traddr": "10.0.0.1", 00:11:07.300 "trsvcid": "38002" 00:11:07.300 }, 00:11:07.300 "auth": { 00:11:07.300 "state": "completed", 00:11:07.300 "digest": "sha256", 00:11:07.300 "dhgroup": "ffdhe2048" 00:11:07.300 } 00:11:07.300 } 00:11:07.300 ]' 00:11:07.300 22:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.300 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.867 22:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.435 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.694 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.007 00:11:09.007 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.007 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.007 22:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.266 { 00:11:09.266 "cntlid": 17, 00:11:09.266 "qid": 0, 00:11:09.266 "state": "enabled", 00:11:09.266 "thread": "nvmf_tgt_poll_group_000", 00:11:09.266 "listen_address": { 00:11:09.266 "trtype": "TCP", 00:11:09.266 "adrfam": "IPv4", 00:11:09.266 "traddr": "10.0.0.2", 00:11:09.266 "trsvcid": "4420" 00:11:09.266 }, 00:11:09.266 "peer_address": { 00:11:09.266 "trtype": "TCP", 00:11:09.266 "adrfam": "IPv4", 00:11:09.266 "traddr": "10.0.0.1", 00:11:09.266 "trsvcid": "48066" 00:11:09.266 }, 00:11:09.266 "auth": { 00:11:09.266 "state": "completed", 00:11:09.266 "digest": "sha256", 00:11:09.266 "dhgroup": "ffdhe3072" 00:11:09.266 } 00:11:09.266 } 00:11:09.266 ]' 00:11:09.266 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.525 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.784 22:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:10.352 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.352 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:10.353 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.353 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.353 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.353 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.353 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.353 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.612 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.612 
22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.871 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.129 22:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.130 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.130 { 00:11:11.130 "cntlid": 19, 00:11:11.130 "qid": 0, 00:11:11.130 "state": "enabled", 00:11:11.130 "thread": "nvmf_tgt_poll_group_000", 00:11:11.130 "listen_address": { 00:11:11.130 "trtype": "TCP", 00:11:11.130 "adrfam": "IPv4", 00:11:11.130 "traddr": "10.0.0.2", 00:11:11.130 "trsvcid": "4420" 00:11:11.130 }, 00:11:11.130 "peer_address": { 00:11:11.130 "trtype": "TCP", 00:11:11.130 "adrfam": "IPv4", 00:11:11.130 "traddr": "10.0.0.1", 00:11:11.130 "trsvcid": "48094" 00:11:11.130 }, 00:11:11.130 "auth": { 00:11:11.130 "state": "completed", 00:11:11.130 "digest": "sha256", 00:11:11.130 "dhgroup": "ffdhe3072" 00:11:11.130 } 00:11:11.130 } 00:11:11.130 ]' 00:11:11.388 22:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.388 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.647 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:12.215 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
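[editor's note] The nvme connect / nvme disconnect lines just above exercise the same key material through the kernel initiator via nvme-cli. A minimal sketch of that step follows, reusing the variables from the earlier sketch; the <host_secret> and <ctrl_secret> placeholders stand in for the literal DHHC-1:..: strings shown in the trace, which are the in-band representations of the key/ckey pair registered on the subsystem.

    # Kernel-initiator check mirroring the 'nvme connect' lines in the trace.
    hostid="d591d0cc-2041-4f11-80f5-97d971e06385"

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "<host_secret>" --dhchap-ctrl-secret "<ctrl_secret>"

    # On success the controller appears and is torn down again immediately,
    # which is the "disconnected 1 controller(s)" message seen in the log.
    nvme disconnect -n "$subnqn"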
00:11:12.215 22:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:12.215 22:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.215 22:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.215 22:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.215 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.215 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:12.215 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.473 22:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.732 22:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.732 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.732 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.992 00:11:12.992 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.992 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.992 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.251 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.251 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.251 22:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.251 22:37:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.251 22:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.251 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.251 { 00:11:13.251 "cntlid": 21, 00:11:13.251 "qid": 0, 00:11:13.251 "state": "enabled", 00:11:13.252 "thread": "nvmf_tgt_poll_group_000", 00:11:13.252 "listen_address": { 00:11:13.252 "trtype": "TCP", 00:11:13.252 "adrfam": "IPv4", 00:11:13.252 "traddr": "10.0.0.2", 00:11:13.252 "trsvcid": "4420" 00:11:13.252 }, 00:11:13.252 "peer_address": { 00:11:13.252 "trtype": "TCP", 00:11:13.252 "adrfam": "IPv4", 00:11:13.252 "traddr": "10.0.0.1", 00:11:13.252 "trsvcid": "48118" 00:11:13.252 }, 00:11:13.252 "auth": { 00:11:13.252 "state": "completed", 00:11:13.252 "digest": "sha256", 00:11:13.252 "dhgroup": "ffdhe3072" 00:11:13.252 } 00:11:13.252 } 00:11:13.252 ]' 00:11:13.252 22:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.252 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.252 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.252 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:13.252 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.529 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.529 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.529 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.787 22:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:14.355 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:14.614 22:37:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:14.614 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.182 00:11:15.182 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.182 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.182 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.182 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.182 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.182 22:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.182 22:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.441 { 00:11:15.441 "cntlid": 23, 00:11:15.441 "qid": 0, 00:11:15.441 "state": "enabled", 00:11:15.441 "thread": "nvmf_tgt_poll_group_000", 00:11:15.441 "listen_address": { 00:11:15.441 "trtype": "TCP", 00:11:15.441 "adrfam": "IPv4", 00:11:15.441 "traddr": "10.0.0.2", 00:11:15.441 "trsvcid": "4420" 00:11:15.441 }, 00:11:15.441 "peer_address": { 00:11:15.441 "trtype": "TCP", 00:11:15.441 "adrfam": "IPv4", 00:11:15.441 "traddr": "10.0.0.1", 00:11:15.441 "trsvcid": "48138" 00:11:15.441 }, 00:11:15.441 "auth": { 00:11:15.441 "state": "completed", 00:11:15.441 "digest": "sha256", 00:11:15.441 "dhgroup": "ffdhe3072" 00:11:15.441 } 00:11:15.441 } 00:11:15.441 ]' 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.441 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.698 22:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:16.263 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.523 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.782 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.041 00:11:17.041 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.041 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.041 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.311 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.311 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.311 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.311 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.311 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.584 { 00:11:17.584 "cntlid": 25, 00:11:17.584 "qid": 0, 00:11:17.584 "state": "enabled", 00:11:17.584 "thread": "nvmf_tgt_poll_group_000", 00:11:17.584 "listen_address": { 00:11:17.584 "trtype": "TCP", 00:11:17.584 "adrfam": "IPv4", 00:11:17.584 "traddr": "10.0.0.2", 00:11:17.584 "trsvcid": "4420" 00:11:17.584 }, 00:11:17.584 "peer_address": { 00:11:17.584 "trtype": "TCP", 00:11:17.584 "adrfam": "IPv4", 00:11:17.584 "traddr": "10.0.0.1", 00:11:17.584 "trsvcid": "48156" 00:11:17.584 }, 00:11:17.584 "auth": { 00:11:17.584 "state": "completed", 00:11:17.584 "digest": "sha256", 00:11:17.584 "dhgroup": "ffdhe4096" 00:11:17.584 } 00:11:17.584 } 00:11:17.584 ]' 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.584 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.842 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.777 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.342 00:11:19.342 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.342 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.342 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.600 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
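[editor's note] The checks at target/auth.sh@44-48 in the surrounding trace verify that authentication actually completed with the expected parameters before tearing the connection down. A hedged sketch of that verification, again reusing the variables from the first sketch and only the RPCs and jq filters that appear in the log (ffdhe4096 matches the iteration shown here):

    # Confirm the attached controller exists under the expected name.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]

    # Read back the subsystem's qpairs and assert the negotiated auth parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]

    # Detach before the next digest/dhgroup/key combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0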
00:11:19.600 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.600 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.600 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.600 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.600 { 00:11:19.600 "cntlid": 27, 00:11:19.600 "qid": 0, 00:11:19.600 "state": "enabled", 00:11:19.600 "thread": "nvmf_tgt_poll_group_000", 00:11:19.600 "listen_address": { 00:11:19.600 "trtype": "TCP", 00:11:19.600 "adrfam": "IPv4", 00:11:19.600 "traddr": "10.0.0.2", 00:11:19.600 "trsvcid": "4420" 00:11:19.600 }, 00:11:19.600 "peer_address": { 00:11:19.600 "trtype": "TCP", 00:11:19.600 "adrfam": "IPv4", 00:11:19.600 "traddr": "10.0.0.1", 00:11:19.600 "trsvcid": "55224" 00:11:19.600 }, 00:11:19.600 "auth": { 00:11:19.600 "state": "completed", 00:11:19.600 "digest": "sha256", 00:11:19.600 "dhgroup": "ffdhe4096" 00:11:19.600 } 00:11:19.600 } 00:11:19.600 ]' 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.601 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.859 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.805 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.064 00:11:21.064 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.064 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.064 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.323 { 00:11:21.323 "cntlid": 29, 00:11:21.323 "qid": 0, 00:11:21.323 "state": "enabled", 00:11:21.323 "thread": "nvmf_tgt_poll_group_000", 00:11:21.323 "listen_address": { 00:11:21.323 "trtype": "TCP", 00:11:21.323 "adrfam": "IPv4", 00:11:21.323 "traddr": "10.0.0.2", 00:11:21.323 "trsvcid": "4420" 00:11:21.323 }, 00:11:21.323 "peer_address": { 00:11:21.323 "trtype": "TCP", 00:11:21.323 "adrfam": "IPv4", 00:11:21.323 "traddr": "10.0.0.1", 00:11:21.323 "trsvcid": "55260" 00:11:21.323 }, 00:11:21.323 "auth": { 00:11:21.323 "state": "completed", 00:11:21.323 "digest": "sha256", 00:11:21.323 "dhgroup": 
"ffdhe4096" 00:11:21.323 } 00:11:21.323 } 00:11:21.323 ]' 00:11:21.323 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.582 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.841 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:22.408 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.666 22:37:40 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:22.925 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.925 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:22.925 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.183 00:11:23.183 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.183 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.183 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.443 { 00:11:23.443 "cntlid": 31, 00:11:23.443 "qid": 0, 00:11:23.443 "state": "enabled", 00:11:23.443 "thread": "nvmf_tgt_poll_group_000", 00:11:23.443 "listen_address": { 00:11:23.443 "trtype": "TCP", 00:11:23.443 "adrfam": "IPv4", 00:11:23.443 "traddr": "10.0.0.2", 00:11:23.443 "trsvcid": "4420" 00:11:23.443 }, 00:11:23.443 "peer_address": { 00:11:23.443 "trtype": "TCP", 00:11:23.443 "adrfam": "IPv4", 00:11:23.443 "traddr": "10.0.0.1", 00:11:23.443 "trsvcid": "55284" 00:11:23.443 }, 00:11:23.443 "auth": { 00:11:23.443 "state": "completed", 00:11:23.443 "digest": "sha256", 00:11:23.443 "dhgroup": "ffdhe4096" 00:11:23.443 } 00:11:23.443 } 00:11:23.443 ]' 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.443 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.727 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid 
d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.663 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.922 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.923 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.182 00:11:25.441 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.441 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.441 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.699 { 00:11:25.699 "cntlid": 33, 00:11:25.699 "qid": 0, 00:11:25.699 "state": "enabled", 00:11:25.699 "thread": "nvmf_tgt_poll_group_000", 00:11:25.699 "listen_address": { 00:11:25.699 "trtype": "TCP", 00:11:25.699 "adrfam": "IPv4", 00:11:25.699 "traddr": "10.0.0.2", 00:11:25.699 "trsvcid": "4420" 00:11:25.699 }, 00:11:25.699 "peer_address": { 00:11:25.699 "trtype": "TCP", 00:11:25.699 "adrfam": "IPv4", 00:11:25.699 "traddr": "10.0.0.1", 00:11:25.699 "trsvcid": "55300" 00:11:25.699 }, 00:11:25.699 "auth": { 00:11:25.699 "state": "completed", 00:11:25.699 "digest": "sha256", 00:11:25.699 "dhgroup": "ffdhe6144" 00:11:25.699 } 00:11:25.699 } 00:11:25.699 ]' 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.699 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.957 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.902 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.469 00:11:27.469 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.469 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.469 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.728 { 00:11:27.728 "cntlid": 35, 00:11:27.728 "qid": 0, 00:11:27.728 "state": "enabled", 00:11:27.728 "thread": "nvmf_tgt_poll_group_000", 00:11:27.728 "listen_address": { 00:11:27.728 "trtype": "TCP", 00:11:27.728 "adrfam": "IPv4", 00:11:27.728 "traddr": "10.0.0.2", 00:11:27.728 "trsvcid": "4420" 00:11:27.728 }, 00:11:27.728 "peer_address": { 00:11:27.728 "trtype": 
"TCP", 00:11:27.728 "adrfam": "IPv4", 00:11:27.728 "traddr": "10.0.0.1", 00:11:27.728 "trsvcid": "55326" 00:11:27.728 }, 00:11:27.728 "auth": { 00:11:27.728 "state": "completed", 00:11:27.728 "digest": "sha256", 00:11:27.728 "dhgroup": "ffdhe6144" 00:11:27.728 } 00:11:27.728 } 00:11:27.728 ]' 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.728 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.295 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:28.863 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.122 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.689 00:11:29.689 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.689 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.689 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.948 { 00:11:29.948 "cntlid": 37, 00:11:29.948 "qid": 0, 00:11:29.948 "state": "enabled", 00:11:29.948 "thread": "nvmf_tgt_poll_group_000", 00:11:29.948 "listen_address": { 00:11:29.948 "trtype": "TCP", 00:11:29.948 "adrfam": "IPv4", 00:11:29.948 "traddr": "10.0.0.2", 00:11:29.948 "trsvcid": "4420" 00:11:29.948 }, 00:11:29.948 "peer_address": { 00:11:29.948 "trtype": "TCP", 00:11:29.948 "adrfam": "IPv4", 00:11:29.948 "traddr": "10.0.0.1", 00:11:29.948 "trsvcid": "57672" 00:11:29.948 }, 00:11:29.948 "auth": { 00:11:29.948 "state": "completed", 00:11:29.948 "digest": "sha256", 00:11:29.948 "dhgroup": "ffdhe6144" 00:11:29.948 } 00:11:29.948 } 00:11:29.948 ]' 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.948 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.207 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:31.142 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.142 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:31.142 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.143 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.143 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.143 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.143 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:31.143 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:31.401 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:31.401 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.401 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.401 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:31.401 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:31.401 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.401 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:31.401 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.401 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.401 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.401 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:31.401 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:31.660 00:11:31.919 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:31.919 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.919 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.178 { 00:11:32.178 "cntlid": 39, 00:11:32.178 "qid": 0, 00:11:32.178 "state": "enabled", 00:11:32.178 "thread": "nvmf_tgt_poll_group_000", 00:11:32.178 "listen_address": { 00:11:32.178 "trtype": "TCP", 00:11:32.178 "adrfam": "IPv4", 00:11:32.178 "traddr": "10.0.0.2", 00:11:32.178 "trsvcid": "4420" 00:11:32.178 }, 00:11:32.178 "peer_address": { 00:11:32.178 "trtype": "TCP", 00:11:32.178 "adrfam": "IPv4", 00:11:32.178 "traddr": "10.0.0.1", 00:11:32.178 "trsvcid": "57696" 00:11:32.178 }, 00:11:32.178 "auth": { 00:11:32.178 "state": "completed", 00:11:32.178 "digest": "sha256", 00:11:32.178 "dhgroup": "ffdhe6144" 00:11:32.178 } 00:11:32.178 } 00:11:32.178 ]' 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.178 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.745 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.313 22:37:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.313 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.572 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.140 00:11:34.140 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.140 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.140 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.399 { 00:11:34.399 "cntlid": 41, 00:11:34.399 "qid": 0, 00:11:34.399 "state": "enabled", 00:11:34.399 "thread": "nvmf_tgt_poll_group_000", 00:11:34.399 "listen_address": { 00:11:34.399 "trtype": 
"TCP", 00:11:34.399 "adrfam": "IPv4", 00:11:34.399 "traddr": "10.0.0.2", 00:11:34.399 "trsvcid": "4420" 00:11:34.399 }, 00:11:34.399 "peer_address": { 00:11:34.399 "trtype": "TCP", 00:11:34.399 "adrfam": "IPv4", 00:11:34.399 "traddr": "10.0.0.1", 00:11:34.399 "trsvcid": "57712" 00:11:34.399 }, 00:11:34.399 "auth": { 00:11:34.399 "state": "completed", 00:11:34.399 "digest": "sha256", 00:11:34.399 "dhgroup": "ffdhe8192" 00:11:34.399 } 00:11:34.399 } 00:11:34.399 ]' 00:11:34.399 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.662 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.927 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.504 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:35.762 22:37:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.762 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.763 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.837 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.837 { 00:11:36.837 "cntlid": 43, 00:11:36.837 "qid": 0, 00:11:36.837 "state": "enabled", 00:11:36.837 "thread": "nvmf_tgt_poll_group_000", 00:11:36.837 "listen_address": { 00:11:36.837 "trtype": "TCP", 00:11:36.837 "adrfam": "IPv4", 00:11:36.837 "traddr": "10.0.0.2", 00:11:36.837 "trsvcid": "4420" 00:11:36.837 }, 00:11:36.837 "peer_address": { 00:11:36.837 "trtype": "TCP", 00:11:36.837 "adrfam": "IPv4", 00:11:36.837 "traddr": "10.0.0.1", 00:11:36.837 "trsvcid": "57740" 00:11:36.837 }, 00:11:36.837 "auth": { 00:11:36.837 "state": "completed", 00:11:36.837 "digest": "sha256", 00:11:36.837 "dhgroup": "ffdhe8192" 00:11:36.837 } 00:11:36.837 } 00:11:36.837 ]' 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.837 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.094 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:37.094 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.094 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.351 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.916 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.183 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.120 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.120 { 00:11:39.120 "cntlid": 45, 00:11:39.120 "qid": 0, 00:11:39.120 "state": "enabled", 00:11:39.120 "thread": "nvmf_tgt_poll_group_000", 00:11:39.120 "listen_address": { 00:11:39.120 "trtype": "TCP", 00:11:39.120 "adrfam": "IPv4", 00:11:39.120 "traddr": "10.0.0.2", 00:11:39.120 "trsvcid": "4420" 00:11:39.120 }, 00:11:39.120 "peer_address": { 00:11:39.120 "trtype": "TCP", 00:11:39.120 "adrfam": "IPv4", 00:11:39.120 "traddr": "10.0.0.1", 00:11:39.120 "trsvcid": "57768" 00:11:39.120 }, 00:11:39.120 "auth": { 00:11:39.120 "state": "completed", 00:11:39.120 "digest": "sha256", 00:11:39.120 "dhgroup": "ffdhe8192" 00:11:39.120 } 00:11:39.120 } 00:11:39.120 ]' 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.120 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.378 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:39.378 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.378 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.378 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.378 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.636 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:40.202 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.460 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.026 00:11:41.026 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.026 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.026 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.592 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.592 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.592 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.592 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.592 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.592 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:41.592 { 00:11:41.592 "cntlid": 47, 00:11:41.592 "qid": 0, 00:11:41.592 "state": "enabled", 00:11:41.592 "thread": "nvmf_tgt_poll_group_000", 00:11:41.592 "listen_address": { 00:11:41.592 "trtype": "TCP", 00:11:41.592 "adrfam": "IPv4", 00:11:41.592 "traddr": "10.0.0.2", 00:11:41.592 "trsvcid": "4420" 00:11:41.592 }, 00:11:41.592 "peer_address": { 00:11:41.592 "trtype": "TCP", 00:11:41.592 "adrfam": "IPv4", 00:11:41.592 "traddr": "10.0.0.1", 00:11:41.593 "trsvcid": "32936" 00:11:41.593 }, 00:11:41.593 "auth": { 00:11:41.593 "state": "completed", 00:11:41.593 "digest": "sha256", 00:11:41.593 "dhgroup": "ffdhe8192" 00:11:41.593 } 00:11:41.593 } 00:11:41.593 ]' 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.593 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.851 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
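The trace above repeats the same per-key authentication loop for every digest/dhgroup combination. As an illustrative recap only (not part of the captured log), a single pass of that loop is sketched below using the exact RPCs visible in this output; it assumes the target subsystem, host NQN, and the DHHC-1 keys (key0..key3 / ckey0..ckey3) were registered earlier in target/auth.sh, outside this excerpt, and the shell variables are introduced here purely for readability.

  # One connect_authenticate iteration (e.g. digest=sha256, dhgroup=ffdhe6144, keyid=1),
  # recapping the commands shown in the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385

  # Host-side initiator: restrict DH-HMAC-CHAP to one digest and one DH group.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host on the subsystem with the key pair under test.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller so the DH-HMAC-CHAP exchange actually runs.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the qpair authenticated, then tear down again.
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
  # (the script also runs a kernel 'nvme connect ... --dhchap-secret' / 'nvme disconnect'
  #  pass with the corresponding DHHC-1 secret before removing the host again)
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
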
00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.787 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.788 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.353 00:11:43.353 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.381 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.381 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.640 { 00:11:43.640 "cntlid": 49, 00:11:43.640 "qid": 0, 00:11:43.640 "state": "enabled", 00:11:43.640 "thread": "nvmf_tgt_poll_group_000", 00:11:43.640 "listen_address": { 00:11:43.640 "trtype": "TCP", 00:11:43.640 "adrfam": "IPv4", 00:11:43.640 "traddr": "10.0.0.2", 00:11:43.640 "trsvcid": "4420" 00:11:43.640 }, 00:11:43.640 "peer_address": { 00:11:43.640 "trtype": "TCP", 00:11:43.640 "adrfam": "IPv4", 00:11:43.640 "traddr": "10.0.0.1", 00:11:43.640 "trsvcid": "32976" 00:11:43.640 }, 00:11:43.640 "auth": { 00:11:43.640 "state": "completed", 00:11:43.640 "digest": "sha384", 00:11:43.640 "dhgroup": "null" 00:11:43.640 } 00:11:43.640 } 00:11:43.640 ]' 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.640 22:38:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.640 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.897 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.832 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.091 00:11:45.091 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.091 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.091 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.350 { 00:11:45.350 "cntlid": 51, 00:11:45.350 "qid": 0, 00:11:45.350 "state": "enabled", 00:11:45.350 "thread": "nvmf_tgt_poll_group_000", 00:11:45.350 "listen_address": { 00:11:45.350 "trtype": "TCP", 00:11:45.350 "adrfam": "IPv4", 00:11:45.350 "traddr": "10.0.0.2", 00:11:45.350 "trsvcid": "4420" 00:11:45.350 }, 00:11:45.350 "peer_address": { 00:11:45.350 "trtype": "TCP", 00:11:45.350 "adrfam": "IPv4", 00:11:45.350 "traddr": "10.0.0.1", 00:11:45.350 "trsvcid": "33002" 00:11:45.350 }, 00:11:45.350 "auth": { 00:11:45.350 "state": "completed", 00:11:45.350 "digest": "sha384", 00:11:45.350 "dhgroup": "null" 00:11:45.350 } 00:11:45.350 } 00:11:45.350 ]' 00:11:45.350 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.608 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.865 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.800 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.060 00:11:47.319 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.319 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.319 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.319 { 00:11:47.319 "cntlid": 53, 00:11:47.319 "qid": 0, 00:11:47.319 "state": "enabled", 00:11:47.319 "thread": "nvmf_tgt_poll_group_000", 00:11:47.319 "listen_address": { 00:11:47.319 "trtype": "TCP", 00:11:47.319 "adrfam": "IPv4", 00:11:47.319 "traddr": "10.0.0.2", 00:11:47.319 "trsvcid": "4420" 00:11:47.319 }, 00:11:47.319 "peer_address": { 00:11:47.319 "trtype": "TCP", 00:11:47.319 "adrfam": "IPv4", 00:11:47.319 "traddr": "10.0.0.1", 00:11:47.319 "trsvcid": "33026" 00:11:47.319 }, 00:11:47.319 "auth": { 00:11:47.319 "state": "completed", 00:11:47.319 "digest": "sha384", 00:11:47.319 "dhgroup": "null" 00:11:47.319 } 00:11:47.319 } 00:11:47.319 ]' 00:11:47.319 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.577 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.836 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.774 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.344 00:11:49.344 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.344 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.344 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.613 { 00:11:49.613 "cntlid": 55, 00:11:49.613 "qid": 0, 00:11:49.613 "state": "enabled", 00:11:49.613 "thread": "nvmf_tgt_poll_group_000", 00:11:49.613 "listen_address": { 00:11:49.613 "trtype": "TCP", 00:11:49.613 "adrfam": "IPv4", 00:11:49.613 "traddr": "10.0.0.2", 00:11:49.613 "trsvcid": "4420" 00:11:49.613 }, 00:11:49.613 "peer_address": { 00:11:49.613 "trtype": "TCP", 00:11:49.613 "adrfam": "IPv4", 00:11:49.613 "traddr": "10.0.0.1", 00:11:49.613 "trsvcid": "37238" 00:11:49.613 }, 00:11:49.613 "auth": { 00:11:49.613 "state": "completed", 00:11:49.613 "digest": "sha384", 00:11:49.613 "dhgroup": "null" 00:11:49.613 } 00:11:49.613 } 00:11:49.613 ]' 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.613 22:38:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.613 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.889 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.825 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.084 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.342 00:11:51.342 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.342 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.342 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.601 { 00:11:51.601 "cntlid": 57, 00:11:51.601 "qid": 0, 00:11:51.601 "state": "enabled", 00:11:51.601 "thread": "nvmf_tgt_poll_group_000", 00:11:51.601 "listen_address": { 00:11:51.601 "trtype": "TCP", 00:11:51.601 "adrfam": "IPv4", 00:11:51.601 "traddr": "10.0.0.2", 00:11:51.601 "trsvcid": "4420" 00:11:51.601 }, 00:11:51.601 "peer_address": { 00:11:51.601 "trtype": "TCP", 00:11:51.601 "adrfam": "IPv4", 00:11:51.601 "traddr": "10.0.0.1", 00:11:51.601 "trsvcid": "37258" 00:11:51.601 }, 00:11:51.601 "auth": { 00:11:51.601 "state": "completed", 00:11:51.601 "digest": "sha384", 00:11:51.601 "dhgroup": "ffdhe2048" 00:11:51.601 } 00:11:51.601 } 00:11:51.601 ]' 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.601 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.859 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.859 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.859 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.859 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.859 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.118 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.053 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.629 00:11:53.629 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.629 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.629 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
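The cycle traced above repeats for every key index, first with the null DH group and then with ffdhe2048 (larger groups follow later in the log). For readability, one pass of that cycle reduces to the sketch below, written in the same shell the test drives; it is a condensed summary, not the literal auth.sh code. The subsystem NQN, host NQN and listener address are the ones printed in the trace, rpc.py paths are abbreviated from the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and keyN/ckeyN stand for the DH-HMAC-CHAP key names registered earlier in the run, with $KEY/$CKEY as placeholders for the DHHC-1 secrets the log prints in full:

# host side: restrict the initiator to the digest/dhgroup under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# target side: allow the host to authenticate with key N (and controller key N) on cnode0
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
    --dhchap-key keyN --dhchap-ctrlr-key ckeyN

# host side: attach, verify the negotiated auth state on the target, then detach
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # auth.state should read "completed"
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# kernel initiator equivalent, passing the DHHC-1 secrets directly, then clean up
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
    --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385
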
00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.888 { 00:11:53.888 "cntlid": 59, 00:11:53.888 "qid": 0, 00:11:53.888 "state": "enabled", 00:11:53.888 "thread": "nvmf_tgt_poll_group_000", 00:11:53.888 "listen_address": { 00:11:53.888 "trtype": "TCP", 00:11:53.888 "adrfam": "IPv4", 00:11:53.888 "traddr": "10.0.0.2", 00:11:53.888 "trsvcid": "4420" 00:11:53.888 }, 00:11:53.888 "peer_address": { 00:11:53.888 "trtype": "TCP", 00:11:53.888 "adrfam": "IPv4", 00:11:53.888 "traddr": "10.0.0.1", 00:11:53.888 "trsvcid": "37266" 00:11:53.888 }, 00:11:53.888 "auth": { 00:11:53.888 "state": "completed", 00:11:53.888 "digest": "sha384", 00:11:53.888 "dhgroup": "ffdhe2048" 00:11:53.888 } 00:11:53.888 } 00:11:53.888 ]' 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.888 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.146 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.079 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.340 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.598 00:11:55.598 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.598 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.598 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.856 { 00:11:55.856 "cntlid": 61, 00:11:55.856 "qid": 0, 00:11:55.856 "state": "enabled", 00:11:55.856 "thread": "nvmf_tgt_poll_group_000", 00:11:55.856 "listen_address": { 00:11:55.856 "trtype": "TCP", 00:11:55.856 "adrfam": "IPv4", 00:11:55.856 "traddr": "10.0.0.2", 00:11:55.856 "trsvcid": "4420" 00:11:55.856 }, 00:11:55.856 "peer_address": { 00:11:55.856 "trtype": "TCP", 00:11:55.856 "adrfam": "IPv4", 00:11:55.856 "traddr": "10.0.0.1", 00:11:55.856 "trsvcid": "37290" 00:11:55.856 }, 00:11:55.856 "auth": { 00:11:55.856 "state": "completed", 00:11:55.856 "digest": "sha384", 00:11:55.856 "dhgroup": 
"ffdhe2048" 00:11:55.856 } 00:11:55.856 } 00:11:55.856 ]' 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.856 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.114 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.114 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.114 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.114 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.077 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.335 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.592 00:11:57.592 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.592 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.592 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.849 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.849 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.849 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.849 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.849 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.849 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.849 { 00:11:57.849 "cntlid": 63, 00:11:57.849 "qid": 0, 00:11:57.849 "state": "enabled", 00:11:57.850 "thread": "nvmf_tgt_poll_group_000", 00:11:57.850 "listen_address": { 00:11:57.850 "trtype": "TCP", 00:11:57.850 "adrfam": "IPv4", 00:11:57.850 "traddr": "10.0.0.2", 00:11:57.850 "trsvcid": "4420" 00:11:57.850 }, 00:11:57.850 "peer_address": { 00:11:57.850 "trtype": "TCP", 00:11:57.850 "adrfam": "IPv4", 00:11:57.850 "traddr": "10.0.0.1", 00:11:57.850 "trsvcid": "37308" 00:11:57.850 }, 00:11:57.850 "auth": { 00:11:57.850 "state": "completed", 00:11:57.850 "digest": "sha384", 00:11:57.850 "dhgroup": "ffdhe2048" 00:11:57.850 } 00:11:57.850 } 00:11:57.850 ]' 00:11:57.850 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.850 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.850 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.107 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.107 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.107 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.107 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.107 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.365 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid 
d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:58.930 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.187 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.187 22:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.187 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.187 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.751 00:11:59.751 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.751 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.751 22:38:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.009 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.010 { 00:12:00.010 "cntlid": 65, 00:12:00.010 "qid": 0, 00:12:00.010 "state": "enabled", 00:12:00.010 "thread": "nvmf_tgt_poll_group_000", 00:12:00.010 "listen_address": { 00:12:00.010 "trtype": "TCP", 00:12:00.010 "adrfam": "IPv4", 00:12:00.010 "traddr": "10.0.0.2", 00:12:00.010 "trsvcid": "4420" 00:12:00.010 }, 00:12:00.010 "peer_address": { 00:12:00.010 "trtype": "TCP", 00:12:00.010 "adrfam": "IPv4", 00:12:00.010 "traddr": "10.0.0.1", 00:12:00.010 "trsvcid": "45872" 00:12:00.010 }, 00:12:00.010 "auth": { 00:12:00.010 "state": "completed", 00:12:00.010 "digest": "sha384", 00:12:00.010 "dhgroup": "ffdhe3072" 00:12:00.010 } 00:12:00.010 } 00:12:00.010 ]' 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.010 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.267 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.267 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.267 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.525 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.091 
22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:01.091 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.348 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.914 00:12:01.914 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.914 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.914 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.172 { 00:12:02.172 "cntlid": 67, 00:12:02.172 "qid": 0, 00:12:02.172 "state": "enabled", 00:12:02.172 "thread": "nvmf_tgt_poll_group_000", 00:12:02.172 "listen_address": { 00:12:02.172 "trtype": "TCP", 00:12:02.172 "adrfam": "IPv4", 00:12:02.172 "traddr": "10.0.0.2", 00:12:02.172 "trsvcid": "4420" 00:12:02.172 }, 00:12:02.172 "peer_address": { 00:12:02.172 "trtype": "TCP", 00:12:02.172 
"adrfam": "IPv4", 00:12:02.172 "traddr": "10.0.0.1", 00:12:02.172 "trsvcid": "45908" 00:12:02.172 }, 00:12:02.172 "auth": { 00:12:02.172 "state": "completed", 00:12:02.172 "digest": "sha384", 00:12:02.172 "dhgroup": "ffdhe3072" 00:12:02.172 } 00:12:02.172 } 00:12:02.172 ]' 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.172 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.689 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:03.255 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.821 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.079 00:12:04.079 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.079 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.079 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.338 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.338 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.338 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.338 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.338 { 00:12:04.338 "cntlid": 69, 00:12:04.338 "qid": 0, 00:12:04.338 "state": "enabled", 00:12:04.338 "thread": "nvmf_tgt_poll_group_000", 00:12:04.338 "listen_address": { 00:12:04.338 "trtype": "TCP", 00:12:04.338 "adrfam": "IPv4", 00:12:04.338 "traddr": "10.0.0.2", 00:12:04.338 "trsvcid": "4420" 00:12:04.338 }, 00:12:04.338 "peer_address": { 00:12:04.338 "trtype": "TCP", 00:12:04.338 "adrfam": "IPv4", 00:12:04.338 "traddr": "10.0.0.1", 00:12:04.338 "trsvcid": "45936" 00:12:04.338 }, 00:12:04.338 "auth": { 00:12:04.338 "state": "completed", 00:12:04.338 "digest": "sha384", 00:12:04.338 "dhgroup": "ffdhe3072" 00:12:04.338 } 00:12:04.338 } 00:12:04.338 ]' 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.338 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.904 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:05.470 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.726 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.983 00:12:06.240 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
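Each of these passes ends with the same verification step: the target's qpair listing is fetched and the negotiated authentication parameters are compared against what the host was configured to offer. The trace that follows shows this for sha384 with ffdhe3072; condensed into a sketch (the RPC and jq filters are the ones traced, but the command-substitution/herestring form is a paraphrase of the script, which stores the listing in the qpairs variable and tests it with pattern matches):

qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # digest negotiated on the admin qpair
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]   # expected dhgroup changes per iteration
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished successfully
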
00:12:06.240 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.240 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.497 { 00:12:06.497 "cntlid": 71, 00:12:06.497 "qid": 0, 00:12:06.497 "state": "enabled", 00:12:06.497 "thread": "nvmf_tgt_poll_group_000", 00:12:06.497 "listen_address": { 00:12:06.497 "trtype": "TCP", 00:12:06.497 "adrfam": "IPv4", 00:12:06.497 "traddr": "10.0.0.2", 00:12:06.497 "trsvcid": "4420" 00:12:06.497 }, 00:12:06.497 "peer_address": { 00:12:06.497 "trtype": "TCP", 00:12:06.497 "adrfam": "IPv4", 00:12:06.497 "traddr": "10.0.0.1", 00:12:06.497 "trsvcid": "45962" 00:12:06.497 }, 00:12:06.497 "auth": { 00:12:06.497 "state": "completed", 00:12:06.497 "digest": "sha384", 00:12:06.497 "dhgroup": "ffdhe3072" 00:12:06.497 } 00:12:06.497 } 00:12:06.497 ]' 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.497 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.755 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.705 22:38:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:07.705 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.962 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.220 00:12:08.220 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.220 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.220 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.479 { 00:12:08.479 "cntlid": 73, 00:12:08.479 "qid": 0, 00:12:08.479 "state": "enabled", 00:12:08.479 "thread": "nvmf_tgt_poll_group_000", 00:12:08.479 "listen_address": { 00:12:08.479 "trtype": 
"TCP", 00:12:08.479 "adrfam": "IPv4", 00:12:08.479 "traddr": "10.0.0.2", 00:12:08.479 "trsvcid": "4420" 00:12:08.479 }, 00:12:08.479 "peer_address": { 00:12:08.479 "trtype": "TCP", 00:12:08.479 "adrfam": "IPv4", 00:12:08.479 "traddr": "10.0.0.1", 00:12:08.479 "trsvcid": "45990" 00:12:08.479 }, 00:12:08.479 "auth": { 00:12:08.479 "state": "completed", 00:12:08.479 "digest": "sha384", 00:12:08.479 "dhgroup": "ffdhe4096" 00:12:08.479 } 00:12:08.479 } 00:12:08.479 ]' 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.479 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.737 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.737 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.737 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.737 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.737 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.996 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:09.563 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.821 22:38:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.821 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.388 00:12:10.388 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.388 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.388 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.646 { 00:12:10.646 "cntlid": 75, 00:12:10.646 "qid": 0, 00:12:10.646 "state": "enabled", 00:12:10.646 "thread": "nvmf_tgt_poll_group_000", 00:12:10.646 "listen_address": { 00:12:10.646 "trtype": "TCP", 00:12:10.646 "adrfam": "IPv4", 00:12:10.646 "traddr": "10.0.0.2", 00:12:10.646 "trsvcid": "4420" 00:12:10.646 }, 00:12:10.646 "peer_address": { 00:12:10.646 "trtype": "TCP", 00:12:10.646 "adrfam": "IPv4", 00:12:10.646 "traddr": "10.0.0.1", 00:12:10.646 "trsvcid": "54058" 00:12:10.646 }, 00:12:10.646 "auth": { 00:12:10.646 "state": "completed", 00:12:10.646 "digest": "sha384", 00:12:10.646 "dhgroup": "ffdhe4096" 00:12:10.646 } 00:12:10.646 } 00:12:10.646 ]' 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.646 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.906 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.841 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:11.842 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.100 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.359 00:12:12.620 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.620 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.620 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.879 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.879 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.879 22:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.879 22:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.879 22:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.879 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.879 { 00:12:12.879 "cntlid": 77, 00:12:12.879 "qid": 0, 00:12:12.879 "state": "enabled", 00:12:12.879 "thread": "nvmf_tgt_poll_group_000", 00:12:12.879 "listen_address": { 00:12:12.879 "trtype": "TCP", 00:12:12.879 "adrfam": "IPv4", 00:12:12.879 "traddr": "10.0.0.2", 00:12:12.879 "trsvcid": "4420" 00:12:12.879 }, 00:12:12.879 "peer_address": { 00:12:12.879 "trtype": "TCP", 00:12:12.879 "adrfam": "IPv4", 00:12:12.879 "traddr": "10.0.0.1", 00:12:12.879 "trsvcid": "54088" 00:12:12.879 }, 00:12:12.879 "auth": { 00:12:12.879 "state": "completed", 00:12:12.879 "digest": "sha384", 00:12:12.880 "dhgroup": "ffdhe4096" 00:12:12.880 } 00:12:12.880 } 00:12:12.880 ]' 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.880 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.450 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.017 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.276 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.844 00:12:14.844 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.844 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.844 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:15.102 { 00:12:15.102 "cntlid": 79, 00:12:15.102 "qid": 0, 00:12:15.102 "state": "enabled", 00:12:15.102 "thread": "nvmf_tgt_poll_group_000", 00:12:15.102 "listen_address": { 00:12:15.102 "trtype": "TCP", 00:12:15.102 "adrfam": "IPv4", 00:12:15.102 "traddr": "10.0.0.2", 00:12:15.102 "trsvcid": "4420" 00:12:15.102 }, 00:12:15.102 "peer_address": { 00:12:15.102 "trtype": "TCP", 00:12:15.102 "adrfam": "IPv4", 00:12:15.102 "traddr": "10.0.0.1", 00:12:15.102 "trsvcid": "54102" 00:12:15.102 }, 00:12:15.102 "auth": { 00:12:15.102 "state": "completed", 00:12:15.102 "digest": "sha384", 00:12:15.102 "dhgroup": "ffdhe4096" 00:12:15.102 } 00:12:15.102 } 00:12:15.102 ]' 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.102 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.670 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.237 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.550 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.117 00:12:17.117 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.117 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.117 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.375 { 00:12:17.375 "cntlid": 81, 00:12:17.375 "qid": 0, 00:12:17.375 "state": "enabled", 00:12:17.375 "thread": "nvmf_tgt_poll_group_000", 00:12:17.375 "listen_address": { 00:12:17.375 "trtype": "TCP", 00:12:17.375 "adrfam": "IPv4", 00:12:17.375 "traddr": "10.0.0.2", 00:12:17.375 "trsvcid": "4420" 00:12:17.375 }, 00:12:17.375 "peer_address": { 00:12:17.375 "trtype": "TCP", 00:12:17.375 "adrfam": "IPv4", 00:12:17.375 "traddr": "10.0.0.1", 00:12:17.375 "trsvcid": "54144" 00:12:17.375 }, 00:12:17.375 "auth": { 00:12:17.375 "state": "completed", 00:12:17.375 "digest": "sha384", 00:12:17.375 "dhgroup": "ffdhe6144" 00:12:17.375 } 00:12:17.375 } 00:12:17.375 ]' 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.375 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.633 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.567 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.826 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.084 00:12:19.084 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.084 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.084 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.341 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.341 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.341 22:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.341 22:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.341 22:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.341 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.341 { 00:12:19.341 "cntlid": 83, 00:12:19.341 "qid": 0, 00:12:19.341 "state": "enabled", 00:12:19.341 "thread": "nvmf_tgt_poll_group_000", 00:12:19.341 "listen_address": { 00:12:19.341 "trtype": "TCP", 00:12:19.341 "adrfam": "IPv4", 00:12:19.341 "traddr": "10.0.0.2", 00:12:19.341 "trsvcid": "4420" 00:12:19.342 }, 00:12:19.342 "peer_address": { 00:12:19.342 "trtype": "TCP", 00:12:19.342 "adrfam": "IPv4", 00:12:19.342 "traddr": "10.0.0.1", 00:12:19.342 "trsvcid": "33204" 00:12:19.342 }, 00:12:19.342 "auth": { 00:12:19.342 "state": "completed", 00:12:19.342 "digest": "sha384", 00:12:19.342 "dhgroup": "ffdhe6144" 00:12:19.342 } 00:12:19.342 } 00:12:19.342 ]' 00:12:19.342 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.342 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.342 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.598 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.598 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.598 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.599 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.599 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.857 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:20.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.424 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.683 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.281 00:12:21.281 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.281 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.281 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.540 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.540 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.540 22:38:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.540 22:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.540 22:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.540 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.540 { 00:12:21.540 "cntlid": 85, 00:12:21.540 "qid": 0, 00:12:21.540 "state": "enabled", 00:12:21.540 "thread": "nvmf_tgt_poll_group_000", 00:12:21.540 "listen_address": { 00:12:21.540 "trtype": "TCP", 00:12:21.541 "adrfam": "IPv4", 00:12:21.541 "traddr": "10.0.0.2", 00:12:21.541 "trsvcid": "4420" 00:12:21.541 }, 00:12:21.541 "peer_address": { 00:12:21.541 "trtype": "TCP", 00:12:21.541 "adrfam": "IPv4", 00:12:21.541 "traddr": "10.0.0.1", 00:12:21.541 "trsvcid": "33232" 00:12:21.541 }, 00:12:21.541 "auth": { 00:12:21.541 "state": "completed", 00:12:21.541 "digest": "sha384", 00:12:21.541 "dhgroup": "ffdhe6144" 00:12:21.541 } 00:12:21.541 } 00:12:21.541 ]' 00:12:21.541 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.541 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.541 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.541 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.541 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.799 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.799 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.799 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.058 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.624 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.882 22:38:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.882 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.450 00:12:23.450 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.450 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.450 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.709 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.709 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.709 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.710 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.710 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.710 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.710 { 00:12:23.710 "cntlid": 87, 00:12:23.710 "qid": 0, 00:12:23.710 "state": "enabled", 00:12:23.710 "thread": "nvmf_tgt_poll_group_000", 00:12:23.710 "listen_address": { 00:12:23.710 "trtype": "TCP", 00:12:23.710 "adrfam": "IPv4", 00:12:23.710 "traddr": "10.0.0.2", 00:12:23.710 "trsvcid": "4420" 00:12:23.710 }, 00:12:23.710 "peer_address": { 00:12:23.710 "trtype": "TCP", 00:12:23.710 "adrfam": "IPv4", 00:12:23.710 "traddr": "10.0.0.1", 00:12:23.710 "trsvcid": "33262" 00:12:23.710 }, 00:12:23.710 "auth": { 00:12:23.710 "state": "completed", 00:12:23.710 "digest": "sha384", 00:12:23.710 "dhgroup": "ffdhe6144" 00:12:23.710 } 00:12:23.710 } 00:12:23.710 ]' 00:12:23.710 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.710 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:23.710 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.968 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.968 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.968 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.968 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.968 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.227 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.794 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.117 22:38:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.117 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.049 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.049 { 00:12:26.049 "cntlid": 89, 00:12:26.049 "qid": 0, 00:12:26.049 "state": "enabled", 00:12:26.049 "thread": "nvmf_tgt_poll_group_000", 00:12:26.049 "listen_address": { 00:12:26.049 "trtype": "TCP", 00:12:26.049 "adrfam": "IPv4", 00:12:26.049 "traddr": "10.0.0.2", 00:12:26.049 "trsvcid": "4420" 00:12:26.049 }, 00:12:26.049 "peer_address": { 00:12:26.049 "trtype": "TCP", 00:12:26.049 "adrfam": "IPv4", 00:12:26.049 "traddr": "10.0.0.1", 00:12:26.049 "trsvcid": "33284" 00:12:26.049 }, 00:12:26.049 "auth": { 00:12:26.049 "state": "completed", 00:12:26.049 "digest": "sha384", 00:12:26.049 "dhgroup": "ffdhe8192" 00:12:26.049 } 00:12:26.049 } 00:12:26.049 ]' 00:12:26.049 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.307 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.307 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.307 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.307 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.307 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.307 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.307 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.565 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret 
DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.499 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.758 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.324 00:12:28.324 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.324 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.324 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.583 { 00:12:28.583 "cntlid": 91, 00:12:28.583 "qid": 0, 00:12:28.583 "state": "enabled", 00:12:28.583 "thread": "nvmf_tgt_poll_group_000", 00:12:28.583 "listen_address": { 00:12:28.583 "trtype": "TCP", 00:12:28.583 "adrfam": "IPv4", 00:12:28.583 "traddr": "10.0.0.2", 00:12:28.583 "trsvcid": "4420" 00:12:28.583 }, 00:12:28.583 "peer_address": { 00:12:28.583 "trtype": "TCP", 00:12:28.583 "adrfam": "IPv4", 00:12:28.583 "traddr": "10.0.0.1", 00:12:28.583 "trsvcid": "33304" 00:12:28.583 }, 00:12:28.583 "auth": { 00:12:28.583 "state": "completed", 00:12:28.583 "digest": "sha384", 00:12:28.583 "dhgroup": "ffdhe8192" 00:12:28.583 } 00:12:28.583 } 00:12:28.583 ]' 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.583 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.842 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.842 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.842 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.099 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:29.727 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.986 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.922 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.922 22:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.181 { 00:12:31.181 "cntlid": 93, 00:12:31.181 "qid": 0, 00:12:31.181 "state": "enabled", 00:12:31.181 "thread": "nvmf_tgt_poll_group_000", 00:12:31.181 "listen_address": { 00:12:31.181 "trtype": "TCP", 00:12:31.181 "adrfam": "IPv4", 00:12:31.181 "traddr": "10.0.0.2", 00:12:31.181 "trsvcid": "4420" 00:12:31.181 }, 00:12:31.181 "peer_address": { 00:12:31.181 "trtype": "TCP", 00:12:31.181 "adrfam": "IPv4", 00:12:31.181 "traddr": "10.0.0.1", 00:12:31.181 "trsvcid": "35212" 00:12:31.181 }, 00:12:31.181 
"auth": { 00:12:31.181 "state": "completed", 00:12:31.181 "digest": "sha384", 00:12:31.181 "dhgroup": "ffdhe8192" 00:12:31.181 } 00:12:31.181 } 00:12:31.181 ]' 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.181 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.440 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.376 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.376 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.311 00:12:33.311 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.311 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.311 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.575 { 00:12:33.575 "cntlid": 95, 00:12:33.575 "qid": 0, 00:12:33.575 "state": "enabled", 00:12:33.575 "thread": "nvmf_tgt_poll_group_000", 00:12:33.575 "listen_address": { 00:12:33.575 "trtype": "TCP", 00:12:33.575 "adrfam": "IPv4", 00:12:33.575 "traddr": "10.0.0.2", 00:12:33.575 "trsvcid": "4420" 00:12:33.575 }, 00:12:33.575 "peer_address": { 00:12:33.575 "trtype": "TCP", 00:12:33.575 "adrfam": "IPv4", 00:12:33.575 "traddr": "10.0.0.1", 00:12:33.575 "trsvcid": "35246" 00:12:33.575 }, 00:12:33.575 "auth": { 00:12:33.575 "state": "completed", 00:12:33.575 "digest": "sha384", 00:12:33.575 "dhgroup": "ffdhe8192" 00:12:33.575 } 00:12:33.575 } 00:12:33.575 ]' 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.575 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.834 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:34.768 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.026 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.285 00:12:35.285 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
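Each pass of this trace exercises one (digest, dhgroup, key) combination through the same RPC-level sequence before checking it again through the kernel initiator. The lines below are a condensed, illustrative sketch of that sequence for the sha512/null pass in progress here; the script path, host RPC socket and option names are copied from the trace, while KEY/CKEY are placeholders for whichever key0..key3 / ckey0..ckey3 pair the loop has selected (they name keys registered earlier in the test, not literal secrets).

    # host side (hostrpc, -s /var/tmp/host.sock): restrict the digests/dhgroups the initiator may negotiate
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # target side (rpc_cmd in the trace, i.e. rpc.py on its default socket): register the host with its DH-HMAC-CHAP keys
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
        --dhchap-key KEY --dhchap-ctrlr-key CKEY

    # host side: attach a controller with the matching key material
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key KEY --dhchap-ctrlr-key CKEY

    # target side: confirm the new qpair actually completed authentication with the expected parameters
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect: sha512, null, completed

The [[ ... == ... ]] comparisons in the surrounding records assert exactly those three fields, one at a time, against the digest and dhgroup chosen for the pass.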
00:12:35.285 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.285 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.543 { 00:12:35.543 "cntlid": 97, 00:12:35.543 "qid": 0, 00:12:35.543 "state": "enabled", 00:12:35.543 "thread": "nvmf_tgt_poll_group_000", 00:12:35.543 "listen_address": { 00:12:35.543 "trtype": "TCP", 00:12:35.543 "adrfam": "IPv4", 00:12:35.543 "traddr": "10.0.0.2", 00:12:35.543 "trsvcid": "4420" 00:12:35.543 }, 00:12:35.543 "peer_address": { 00:12:35.543 "trtype": "TCP", 00:12:35.543 "adrfam": "IPv4", 00:12:35.543 "traddr": "10.0.0.1", 00:12:35.543 "trsvcid": "35258" 00:12:35.543 }, 00:12:35.543 "auth": { 00:12:35.543 "state": "completed", 00:12:35.543 "digest": "sha512", 00:12:35.543 "dhgroup": "null" 00:12:35.543 } 00:12:35.543 } 00:12:35.543 ]' 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.543 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.801 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:35.801 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.801 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.801 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.801 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.057 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.622 22:38:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.622 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.880 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.447 00:12:37.447 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.447 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.447 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.705 { 00:12:37.705 "cntlid": 99, 00:12:37.705 "qid": 0, 00:12:37.705 "state": "enabled", 00:12:37.705 "thread": "nvmf_tgt_poll_group_000", 00:12:37.705 "listen_address": { 00:12:37.705 "trtype": "TCP", 00:12:37.705 "adrfam": 
"IPv4", 00:12:37.705 "traddr": "10.0.0.2", 00:12:37.705 "trsvcid": "4420" 00:12:37.705 }, 00:12:37.705 "peer_address": { 00:12:37.705 "trtype": "TCP", 00:12:37.705 "adrfam": "IPv4", 00:12:37.705 "traddr": "10.0.0.1", 00:12:37.705 "trsvcid": "35276" 00:12:37.705 }, 00:12:37.705 "auth": { 00:12:37.705 "state": "completed", 00:12:37.705 "digest": "sha512", 00:12:37.705 "dhgroup": "null" 00:12:37.705 } 00:12:37.705 } 00:12:37.705 ]' 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.705 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.963 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.906 22:38:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.906 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.163 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.163 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.163 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.420 00:12:39.420 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.420 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.420 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.677 { 00:12:39.677 "cntlid": 101, 00:12:39.677 "qid": 0, 00:12:39.677 "state": "enabled", 00:12:39.677 "thread": "nvmf_tgt_poll_group_000", 00:12:39.677 "listen_address": { 00:12:39.677 "trtype": "TCP", 00:12:39.677 "adrfam": "IPv4", 00:12:39.677 "traddr": "10.0.0.2", 00:12:39.677 "trsvcid": "4420" 00:12:39.677 }, 00:12:39.677 "peer_address": { 00:12:39.677 "trtype": "TCP", 00:12:39.677 "adrfam": "IPv4", 00:12:39.677 "traddr": "10.0.0.1", 00:12:39.677 "trsvcid": "54808" 00:12:39.677 }, 00:12:39.677 "auth": { 00:12:39.677 "state": "completed", 00:12:39.677 "digest": "sha512", 00:12:39.677 "dhgroup": "null" 00:12:39.677 } 00:12:39.677 } 00:12:39.677 ]' 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
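After the RPC-level qpair check, every pass detaches the SPDK host controller and repeats the authentication through the kernel initiator, which is what the next records show. The following is a condensed, illustrative sketch of that half of the cycle; the nvme-cli flags are the ones used in the trace, and the DHHC-1 strings are placeholders for the host/controller secrets belonging to the key in use (the literal values appear in the connect lines of the log).

    # drop the SPDK host-side controller used for the RPC check
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # re-authenticate through the kernel initiator with the same material in DHHC-1 form
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
        --hostid d591d0cc-2041-4f11-80f5-97d971e06385 \
        --dhchap-secret '<DHHC-1 host secret>' --dhchap-ctrl-secret '<DHHC-1 controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # unregister the host so the next key/dhgroup combination starts from a clean subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385

For passes that use a key with no controller key (key3 above), the --dhchap-ctrlr-key and --dhchap-ctrl-secret options are simply omitted, as the corresponding records show.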
00:12:39.677 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.241 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:41.208 22:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.208 22:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:41.208 22:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.208 22:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 22:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.209 22:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.209 22:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.209 22:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.467 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.726 00:12:41.726 22:38:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.726 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.726 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.985 { 00:12:41.985 "cntlid": 103, 00:12:41.985 "qid": 0, 00:12:41.985 "state": "enabled", 00:12:41.985 "thread": "nvmf_tgt_poll_group_000", 00:12:41.985 "listen_address": { 00:12:41.985 "trtype": "TCP", 00:12:41.985 "adrfam": "IPv4", 00:12:41.985 "traddr": "10.0.0.2", 00:12:41.985 "trsvcid": "4420" 00:12:41.985 }, 00:12:41.985 "peer_address": { 00:12:41.985 "trtype": "TCP", 00:12:41.985 "adrfam": "IPv4", 00:12:41.985 "traddr": "10.0.0.1", 00:12:41.985 "trsvcid": "54832" 00:12:41.985 }, 00:12:41.985 "auth": { 00:12:41.985 "state": "completed", 00:12:41.985 "digest": "sha512", 00:12:41.985 "dhgroup": "null" 00:12:41.985 } 00:12:41.985 } 00:12:41.985 ]' 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.985 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.244 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:42.244 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.244 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.244 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.244 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.503 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:43.441 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:43.441 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:43.441 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.441 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.442 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.011 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.011 { 00:12:44.011 "cntlid": 105, 00:12:44.011 "qid": 0, 00:12:44.011 "state": "enabled", 00:12:44.011 "thread": "nvmf_tgt_poll_group_000", 00:12:44.011 
"listen_address": { 00:12:44.011 "trtype": "TCP", 00:12:44.011 "adrfam": "IPv4", 00:12:44.011 "traddr": "10.0.0.2", 00:12:44.011 "trsvcid": "4420" 00:12:44.011 }, 00:12:44.011 "peer_address": { 00:12:44.011 "trtype": "TCP", 00:12:44.011 "adrfam": "IPv4", 00:12:44.011 "traddr": "10.0.0.1", 00:12:44.011 "trsvcid": "54864" 00:12:44.011 }, 00:12:44.011 "auth": { 00:12:44.011 "state": "completed", 00:12:44.011 "digest": "sha512", 00:12:44.011 "dhgroup": "ffdhe2048" 00:12:44.011 } 00:12:44.011 } 00:12:44.011 ]' 00:12:44.011 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.269 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.528 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:45.474 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:45.474 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:45.474 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.474 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.474 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.475 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.040 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.040 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.297 { 00:12:46.297 "cntlid": 107, 00:12:46.297 "qid": 0, 00:12:46.297 "state": "enabled", 00:12:46.297 "thread": "nvmf_tgt_poll_group_000", 00:12:46.297 "listen_address": { 00:12:46.297 "trtype": "TCP", 00:12:46.297 "adrfam": "IPv4", 00:12:46.297 "traddr": "10.0.0.2", 00:12:46.297 "trsvcid": "4420" 00:12:46.297 }, 00:12:46.297 "peer_address": { 00:12:46.297 "trtype": "TCP", 00:12:46.297 "adrfam": "IPv4", 00:12:46.297 "traddr": "10.0.0.1", 00:12:46.297 "trsvcid": "54894" 00:12:46.297 }, 00:12:46.297 "auth": { 00:12:46.297 "state": "completed", 00:12:46.297 "digest": "sha512", 00:12:46.297 "dhgroup": "ffdhe2048" 00:12:46.297 } 00:12:46.297 } 00:12:46.297 ]' 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.297 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.297 22:39:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.297 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.297 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.555 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:47.488 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.488 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.746 00:12:48.004 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.004 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.004 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.263 { 00:12:48.263 "cntlid": 109, 00:12:48.263 "qid": 0, 00:12:48.263 "state": "enabled", 00:12:48.263 "thread": "nvmf_tgt_poll_group_000", 00:12:48.263 "listen_address": { 00:12:48.263 "trtype": "TCP", 00:12:48.263 "adrfam": "IPv4", 00:12:48.263 "traddr": "10.0.0.2", 00:12:48.263 "trsvcid": "4420" 00:12:48.263 }, 00:12:48.263 "peer_address": { 00:12:48.263 "trtype": "TCP", 00:12:48.263 "adrfam": "IPv4", 00:12:48.263 "traddr": "10.0.0.1", 00:12:48.263 "trsvcid": "54918" 00:12:48.263 }, 00:12:48.263 "auth": { 00:12:48.263 "state": "completed", 00:12:48.263 "digest": "sha512", 00:12:48.263 "dhgroup": "ffdhe2048" 00:12:48.263 } 00:12:48.263 } 00:12:48.263 ]' 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.263 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.263 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.263 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.263 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.523 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:49.463 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.722 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.981 00:12:49.981 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.981 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.981 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:50.239 { 00:12:50.239 "cntlid": 111, 00:12:50.239 "qid": 0, 00:12:50.239 "state": "enabled", 00:12:50.239 "thread": "nvmf_tgt_poll_group_000", 00:12:50.239 "listen_address": { 00:12:50.239 "trtype": "TCP", 00:12:50.239 "adrfam": "IPv4", 00:12:50.239 "traddr": "10.0.0.2", 00:12:50.239 "trsvcid": "4420" 00:12:50.239 }, 00:12:50.239 "peer_address": { 00:12:50.239 "trtype": "TCP", 00:12:50.239 "adrfam": "IPv4", 00:12:50.239 "traddr": "10.0.0.1", 00:12:50.239 "trsvcid": "44004" 00:12:50.239 }, 00:12:50.239 "auth": { 00:12:50.239 "state": "completed", 00:12:50.239 "digest": "sha512", 00:12:50.239 "dhgroup": "ffdhe2048" 00:12:50.239 } 00:12:50.239 } 00:12:50.239 ]' 00:12:50.239 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.239 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.239 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.239 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:50.239 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.497 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.497 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.497 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.754 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:51.321 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.888 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.146 00:12:52.146 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.146 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.146 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.404 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.404 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.404 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.405 { 00:12:52.405 "cntlid": 113, 00:12:52.405 "qid": 0, 00:12:52.405 "state": "enabled", 00:12:52.405 "thread": "nvmf_tgt_poll_group_000", 00:12:52.405 "listen_address": { 00:12:52.405 "trtype": "TCP", 00:12:52.405 "adrfam": "IPv4", 00:12:52.405 "traddr": "10.0.0.2", 00:12:52.405 "trsvcid": "4420" 00:12:52.405 }, 00:12:52.405 "peer_address": { 00:12:52.405 "trtype": "TCP", 00:12:52.405 "adrfam": "IPv4", 00:12:52.405 "traddr": "10.0.0.1", 00:12:52.405 "trsvcid": "44012" 00:12:52.405 }, 00:12:52.405 "auth": { 00:12:52.405 "state": "completed", 00:12:52.405 "digest": "sha512", 00:12:52.405 "dhgroup": "ffdhe3072" 00:12:52.405 } 00:12:52.405 } 00:12:52.405 ]' 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:52.405 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.664 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.664 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.664 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.923 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:12:53.489 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.489 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:53.489 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.489 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.748 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.748 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.748 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:53.748 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.006 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.265 00:12:54.265 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.265 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.265 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.523 { 00:12:54.523 "cntlid": 115, 00:12:54.523 "qid": 0, 00:12:54.523 "state": "enabled", 00:12:54.523 "thread": "nvmf_tgt_poll_group_000", 00:12:54.523 "listen_address": { 00:12:54.523 "trtype": "TCP", 00:12:54.523 "adrfam": "IPv4", 00:12:54.523 "traddr": "10.0.0.2", 00:12:54.523 "trsvcid": "4420" 00:12:54.523 }, 00:12:54.523 "peer_address": { 00:12:54.523 "trtype": "TCP", 00:12:54.523 "adrfam": "IPv4", 00:12:54.523 "traddr": "10.0.0.1", 00:12:54.523 "trsvcid": "44048" 00:12:54.523 }, 00:12:54.523 "auth": { 00:12:54.523 "state": "completed", 00:12:54.523 "digest": "sha512", 00:12:54.523 "dhgroup": "ffdhe3072" 00:12:54.523 } 00:12:54.523 } 00:12:54.523 ]' 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.523 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.781 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:54.781 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.781 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.781 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.781 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.039 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:12:55.970 22:39:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:55.970 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.228 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.229 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.487 00:12:56.487 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.487 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.487 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.053 { 00:12:57.053 "cntlid": 117, 00:12:57.053 "qid": 0, 00:12:57.053 "state": "enabled", 00:12:57.053 "thread": "nvmf_tgt_poll_group_000", 00:12:57.053 "listen_address": { 00:12:57.053 "trtype": "TCP", 00:12:57.053 "adrfam": "IPv4", 00:12:57.053 "traddr": "10.0.0.2", 00:12:57.053 "trsvcid": "4420" 00:12:57.053 }, 00:12:57.053 "peer_address": { 00:12:57.053 "trtype": "TCP", 00:12:57.053 "adrfam": "IPv4", 00:12:57.053 "traddr": "10.0.0.1", 00:12:57.053 "trsvcid": "44062" 00:12:57.053 }, 00:12:57.053 "auth": { 00:12:57.053 "state": "completed", 00:12:57.053 "digest": "sha512", 00:12:57.053 "dhgroup": "ffdhe3072" 00:12:57.053 } 00:12:57.053 } 00:12:57.053 ]' 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.053 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.312 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.246 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.246 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.504 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.504 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.504 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.763 00:12:58.763 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.763 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.763 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.330 { 00:12:59.330 "cntlid": 119, 00:12:59.330 "qid": 0, 00:12:59.330 "state": "enabled", 00:12:59.330 "thread": "nvmf_tgt_poll_group_000", 00:12:59.330 "listen_address": { 00:12:59.330 "trtype": "TCP", 00:12:59.330 "adrfam": "IPv4", 00:12:59.330 "traddr": "10.0.0.2", 00:12:59.330 "trsvcid": "4420" 00:12:59.330 }, 00:12:59.330 "peer_address": { 00:12:59.330 "trtype": "TCP", 00:12:59.330 "adrfam": "IPv4", 00:12:59.330 "traddr": "10.0.0.1", 00:12:59.330 "trsvcid": "53636" 00:12:59.330 }, 00:12:59.330 "auth": { 00:12:59.330 "state": "completed", 00:12:59.330 "digest": "sha512", 00:12:59.330 "dhgroup": "ffdhe3072" 00:12:59.330 } 00:12:59.330 } 00:12:59.330 ]' 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.330 
22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.330 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.330 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.330 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.330 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.588 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.524 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.783 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.042 00:13:01.042 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.042 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.042 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.300 { 00:13:01.300 "cntlid": 121, 00:13:01.300 "qid": 0, 00:13:01.300 "state": "enabled", 00:13:01.300 "thread": "nvmf_tgt_poll_group_000", 00:13:01.300 "listen_address": { 00:13:01.300 "trtype": "TCP", 00:13:01.300 "adrfam": "IPv4", 00:13:01.300 "traddr": "10.0.0.2", 00:13:01.300 "trsvcid": "4420" 00:13:01.300 }, 00:13:01.300 "peer_address": { 00:13:01.300 "trtype": "TCP", 00:13:01.300 "adrfam": "IPv4", 00:13:01.300 "traddr": "10.0.0.1", 00:13:01.300 "trsvcid": "53666" 00:13:01.300 }, 00:13:01.300 "auth": { 00:13:01.300 "state": "completed", 00:13:01.300 "digest": "sha512", 00:13:01.300 "dhgroup": "ffdhe4096" 00:13:01.300 } 00:13:01.300 } 00:13:01.300 ]' 00:13:01.300 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.560 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.819 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret 
DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:02.384 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.641 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.206 00:13:03.206 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.206 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.206 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
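The per-combination flow that target/auth.sh repeats above (shown here for sha512/ffdhe4096 with key1) condenses to the following sketch. It is an editor-added illustration assembled only from commands already visible in this log; the NQNs, the host RPC socket /var/tmp/host.sock, and the key1/ckey1 names are taken from the log, and the hostrpc/rpc_cmd helpers are written out as the plain rpc.py calls they expand to.

# host side: restrict the initiator to the digest/dhgroup under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# target side: register the host NQN with the DH-HMAC-CHAP keys for this iteration
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach the controller, which forces an authenticated connect
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1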
00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.463 { 00:13:03.463 "cntlid": 123, 00:13:03.463 "qid": 0, 00:13:03.463 "state": "enabled", 00:13:03.463 "thread": "nvmf_tgt_poll_group_000", 00:13:03.463 "listen_address": { 00:13:03.463 "trtype": "TCP", 00:13:03.463 "adrfam": "IPv4", 00:13:03.463 "traddr": "10.0.0.2", 00:13:03.463 "trsvcid": "4420" 00:13:03.463 }, 00:13:03.463 "peer_address": { 00:13:03.463 "trtype": "TCP", 00:13:03.463 "adrfam": "IPv4", 00:13:03.463 "traddr": "10.0.0.1", 00:13:03.463 "trsvcid": "53680" 00:13:03.463 }, 00:13:03.463 "auth": { 00:13:03.463 "state": "completed", 00:13:03.463 "digest": "sha512", 00:13:03.463 "dhgroup": "ffdhe4096" 00:13:03.463 } 00:13:03.463 } 00:13:03.463 ]' 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.463 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.721 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.721 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.721 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.721 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.721 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.979 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:04.543 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.800 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.365 00:13:05.365 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.365 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.365 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.625 { 00:13:05.625 "cntlid": 125, 00:13:05.625 "qid": 0, 00:13:05.625 "state": "enabled", 00:13:05.625 "thread": "nvmf_tgt_poll_group_000", 00:13:05.625 "listen_address": { 00:13:05.625 "trtype": "TCP", 00:13:05.625 "adrfam": "IPv4", 00:13:05.625 "traddr": "10.0.0.2", 00:13:05.625 "trsvcid": "4420" 00:13:05.625 }, 00:13:05.625 "peer_address": { 00:13:05.625 "trtype": "TCP", 00:13:05.625 "adrfam": "IPv4", 00:13:05.625 "traddr": "10.0.0.1", 00:13:05.625 "trsvcid": "53706" 00:13:05.625 }, 00:13:05.625 
"auth": { 00:13:05.625 "state": "completed", 00:13:05.625 "digest": "sha512", 00:13:05.625 "dhgroup": "ffdhe4096" 00:13:05.625 } 00:13:05.625 } 00:13:05.625 ]' 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.625 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.883 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.814 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.395 00:13:07.395 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.395 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.395 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.395 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.395 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.395 22:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.395 22:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.395 22:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.395 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.395 { 00:13:07.395 "cntlid": 127, 00:13:07.395 "qid": 0, 00:13:07.395 "state": "enabled", 00:13:07.395 "thread": "nvmf_tgt_poll_group_000", 00:13:07.395 "listen_address": { 00:13:07.395 "trtype": "TCP", 00:13:07.395 "adrfam": "IPv4", 00:13:07.395 "traddr": "10.0.0.2", 00:13:07.395 "trsvcid": "4420" 00:13:07.395 }, 00:13:07.395 "peer_address": { 00:13:07.395 "trtype": "TCP", 00:13:07.395 "adrfam": "IPv4", 00:13:07.395 "traddr": "10.0.0.1", 00:13:07.395 "trsvcid": "53718" 00:13:07.395 }, 00:13:07.395 "auth": { 00:13:07.395 "state": "completed", 00:13:07.395 "digest": "sha512", 00:13:07.395 "dhgroup": "ffdhe4096" 00:13:07.395 } 00:13:07.395 } 00:13:07.395 ]' 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.669 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.927 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:08.860 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.118 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.375 00:13:09.375 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.376 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.376 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.941 { 00:13:09.941 "cntlid": 129, 00:13:09.941 "qid": 0, 00:13:09.941 "state": "enabled", 00:13:09.941 "thread": "nvmf_tgt_poll_group_000", 00:13:09.941 "listen_address": { 00:13:09.941 "trtype": "TCP", 00:13:09.941 "adrfam": "IPv4", 00:13:09.941 "traddr": "10.0.0.2", 00:13:09.941 "trsvcid": "4420" 00:13:09.941 }, 00:13:09.941 "peer_address": { 00:13:09.941 "trtype": "TCP", 00:13:09.941 "adrfam": "IPv4", 00:13:09.941 "traddr": "10.0.0.1", 00:13:09.941 "trsvcid": "47254" 00:13:09.941 }, 00:13:09.941 "auth": { 00:13:09.941 "state": "completed", 00:13:09.941 "digest": "sha512", 00:13:09.941 "dhgroup": "ffdhe6144" 00:13:09.941 } 00:13:09.941 } 00:13:09.941 ]' 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.941 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.199 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:13:11.134 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
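The checks at target/auth.sh@44 through @48 above are how each iteration confirms the authenticated connect actually succeeded. A minimal standalone equivalent, using only the RPCs and jq filters already present in this log (rpc_cmd, the target-side wrapper, is again written as plain rpc.py):

# the host-side controller must exist under the expected name
[[ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# the target must report the negotiated auth parameters on the new queue pair
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe6144 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]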
00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.135 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.393 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:11.393 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.393 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:11.393 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:11.393 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.394 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.962 00:13:11.962 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.962 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.962 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.221 { 00:13:12.221 "cntlid": 131, 00:13:12.221 "qid": 0, 00:13:12.221 "state": "enabled", 00:13:12.221 "thread": "nvmf_tgt_poll_group_000", 00:13:12.221 "listen_address": { 00:13:12.221 "trtype": "TCP", 00:13:12.221 "adrfam": "IPv4", 00:13:12.221 "traddr": "10.0.0.2", 00:13:12.221 
"trsvcid": "4420" 00:13:12.221 }, 00:13:12.221 "peer_address": { 00:13:12.221 "trtype": "TCP", 00:13:12.221 "adrfam": "IPv4", 00:13:12.221 "traddr": "10.0.0.1", 00:13:12.221 "trsvcid": "47292" 00:13:12.221 }, 00:13:12.221 "auth": { 00:13:12.221 "state": "completed", 00:13:12.221 "digest": "sha512", 00:13:12.221 "dhgroup": "ffdhe6144" 00:13:12.221 } 00:13:12.221 } 00:13:12.221 ]' 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.221 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.479 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:13.415 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.415 22:39:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.415 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.674 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.674 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.932 00:13:13.932 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.932 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.932 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.499 { 00:13:14.499 "cntlid": 133, 00:13:14.499 "qid": 0, 00:13:14.499 "state": "enabled", 00:13:14.499 "thread": "nvmf_tgt_poll_group_000", 00:13:14.499 "listen_address": { 00:13:14.499 "trtype": "TCP", 00:13:14.499 "adrfam": "IPv4", 00:13:14.499 "traddr": "10.0.0.2", 00:13:14.499 "trsvcid": "4420" 00:13:14.499 }, 00:13:14.499 "peer_address": { 00:13:14.499 "trtype": "TCP", 00:13:14.499 "adrfam": "IPv4", 00:13:14.499 "traddr": "10.0.0.1", 00:13:14.499 "trsvcid": "47306" 00:13:14.499 }, 00:13:14.499 "auth": { 00:13:14.499 "state": "completed", 00:13:14.499 "digest": "sha512", 00:13:14.499 "dhgroup": "ffdhe6144" 00:13:14.499 } 00:13:14.499 } 00:13:14.499 ]' 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:14.499 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.759 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.694 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.259 
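The trace above is one pass of the script's connect_authenticate helper (here sha512 with ffdhe6144 and key3): the host-side bdev_nvme options are pinned to the digest/dhgroup pair under test, the host NQN is added back to the subsystem with the corresponding DH-HMAC-CHAP key, and a controller is attached through the host RPC socket. A condensed sketch of that sequence, using only the RPCs visible in this trace -- the address, NQNs and key names are the ones from this run, rpc_cmd is the suite's target-side wrapper, and the keys themselves were registered earlier in the log:

  # host side: restrict negotiation to the digest/dhgroup being exercised
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN with the key under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3
  # host side: attach a controller, authenticating with the same key
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3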
00:13:16.259 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.259 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.259 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.517 { 00:13:16.517 "cntlid": 135, 00:13:16.517 "qid": 0, 00:13:16.517 "state": "enabled", 00:13:16.517 "thread": "nvmf_tgt_poll_group_000", 00:13:16.517 "listen_address": { 00:13:16.517 "trtype": "TCP", 00:13:16.517 "adrfam": "IPv4", 00:13:16.517 "traddr": "10.0.0.2", 00:13:16.517 "trsvcid": "4420" 00:13:16.517 }, 00:13:16.517 "peer_address": { 00:13:16.517 "trtype": "TCP", 00:13:16.517 "adrfam": "IPv4", 00:13:16.517 "traddr": "10.0.0.1", 00:13:16.517 "trsvcid": "47338" 00:13:16.517 }, 00:13:16.517 "auth": { 00:13:16.517 "state": "completed", 00:13:16.517 "digest": "sha512", 00:13:16.517 "dhgroup": "ffdhe6144" 00:13:16.517 } 00:13:16.517 } 00:13:16.517 ]' 00:13:16.517 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.824 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.825 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.825 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:16.825 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.825 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.825 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.825 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.083 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.649 22:39:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:17.649 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.907 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.844 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.844 { 00:13:18.844 "cntlid": 137, 00:13:18.844 "qid": 0, 00:13:18.844 "state": "enabled", 
00:13:18.844 "thread": "nvmf_tgt_poll_group_000", 00:13:18.844 "listen_address": { 00:13:18.844 "trtype": "TCP", 00:13:18.844 "adrfam": "IPv4", 00:13:18.844 "traddr": "10.0.0.2", 00:13:18.844 "trsvcid": "4420" 00:13:18.844 }, 00:13:18.844 "peer_address": { 00:13:18.844 "trtype": "TCP", 00:13:18.844 "adrfam": "IPv4", 00:13:18.844 "traddr": "10.0.0.1", 00:13:18.844 "trsvcid": "47360" 00:13:18.844 }, 00:13:18.844 "auth": { 00:13:18.844 "state": "completed", 00:13:18.844 "digest": "sha512", 00:13:18.844 "dhgroup": "ffdhe8192" 00:13:18.844 } 00:13:18.844 } 00:13:18.844 ]' 00:13:18.844 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.102 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.398 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:19.985 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:20.241 
22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.241 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.172 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.172 { 00:13:21.172 "cntlid": 139, 00:13:21.172 "qid": 0, 00:13:21.172 "state": "enabled", 00:13:21.172 "thread": "nvmf_tgt_poll_group_000", 00:13:21.172 "listen_address": { 00:13:21.172 "trtype": "TCP", 00:13:21.172 "adrfam": "IPv4", 00:13:21.172 "traddr": "10.0.0.2", 00:13:21.172 "trsvcid": "4420" 00:13:21.172 }, 00:13:21.172 "peer_address": { 00:13:21.172 "trtype": "TCP", 00:13:21.172 "adrfam": "IPv4", 00:13:21.172 "traddr": "10.0.0.1", 00:13:21.172 "trsvcid": "44972" 00:13:21.172 }, 00:13:21.172 "auth": { 00:13:21.172 "state": "completed", 00:13:21.172 "digest": "sha512", 00:13:21.172 "dhgroup": "ffdhe8192" 00:13:21.172 } 00:13:21.172 } 00:13:21.172 ]' 00:13:21.172 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.431 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.689 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:01:Y2I5OWM3MDIxMDViYzY5YTZiNDY1ZDNlMzgxMDkwZDGb4nms: --dhchap-ctrl-secret DHHC-1:02:NmJkMjczOTUzYTcwZjI3ZTg1MzJkNmJiZTEzZjBmYzI0Mzg5OTM2NzI3NTc0N2U3gIiCfg==: 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:22.255 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.512 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.513 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.513 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.446 00:13:23.446 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.446 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.446 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.705 { 00:13:23.705 "cntlid": 141, 00:13:23.705 "qid": 0, 00:13:23.705 "state": "enabled", 00:13:23.705 "thread": "nvmf_tgt_poll_group_000", 00:13:23.705 "listen_address": { 00:13:23.705 "trtype": "TCP", 00:13:23.705 "adrfam": "IPv4", 00:13:23.705 "traddr": "10.0.0.2", 00:13:23.705 "trsvcid": "4420" 00:13:23.705 }, 00:13:23.705 "peer_address": { 00:13:23.705 "trtype": "TCP", 00:13:23.705 "adrfam": "IPv4", 00:13:23.705 "traddr": "10.0.0.1", 00:13:23.705 "trsvcid": "44984" 00:13:23.705 }, 00:13:23.705 "auth": { 00:13:23.705 "state": "completed", 00:13:23.705 "digest": "sha512", 00:13:23.705 "dhgroup": "ffdhe8192" 00:13:23.705 } 00:13:23.705 } 00:13:23.705 ]' 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.705 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.973 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:02:OGJiYWNiYjdmODVlYjIxNWUwYWViNTA0ZTgyOWUyMmUzMGIwMDg3N2JhMGY1MWFiIa1S7Q==: --dhchap-ctrl-secret DHHC-1:01:NWE1ZjIxOGQ2MjU0ZDE5NDYxOGQ3NGJiNDNkODExZTgqptDX: 00:13:24.540 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:24.798 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.057 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.624 00:13:25.624 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.624 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.624 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
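A successful attach alone does not prove the DH-HMAC-CHAP exchange ran with the intended parameters, so after each connect the script cross-checks both sides: bdev_nvme_get_controllers on the host socket should list nvme0, and nvmf_subsystem_get_qpairs on the target should report the negotiated digest, dhgroup and a completed auth state, which the script extracts with jq (as in the qpair listing that follows). A rough sketch of that verification step, reusing the jq filters from this trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"                   # expect: sha512
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"                   # expect: ffdhe8192
  jq -r '.[0].auth.state'   <<< "$qpairs"                   # expect: completed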
00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.881 { 00:13:25.881 "cntlid": 143, 00:13:25.881 "qid": 0, 00:13:25.881 "state": "enabled", 00:13:25.881 "thread": "nvmf_tgt_poll_group_000", 00:13:25.881 "listen_address": { 00:13:25.881 "trtype": "TCP", 00:13:25.881 "adrfam": "IPv4", 00:13:25.881 "traddr": "10.0.0.2", 00:13:25.881 "trsvcid": "4420" 00:13:25.881 }, 00:13:25.881 "peer_address": { 00:13:25.881 "trtype": "TCP", 00:13:25.881 "adrfam": "IPv4", 00:13:25.881 "traddr": "10.0.0.1", 00:13:25.881 "trsvcid": "45026" 00:13:25.881 }, 00:13:25.881 "auth": { 00:13:25.881 "state": "completed", 00:13:25.881 "digest": "sha512", 00:13:25.881 "dhgroup": "ffdhe8192" 00:13:25.881 } 00:13:25.881 } 00:13:25.881 ]' 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.881 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.139 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.139 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.139 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.139 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.073 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.074 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.333 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.333 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.333 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.333 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.901 00:13:27.901 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.901 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.901 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.159 { 00:13:28.159 "cntlid": 145, 00:13:28.159 "qid": 0, 00:13:28.159 "state": "enabled", 00:13:28.159 "thread": "nvmf_tgt_poll_group_000", 00:13:28.159 "listen_address": { 00:13:28.159 "trtype": "TCP", 00:13:28.159 "adrfam": "IPv4", 00:13:28.159 "traddr": "10.0.0.2", 00:13:28.159 "trsvcid": "4420" 00:13:28.159 }, 00:13:28.159 "peer_address": { 00:13:28.159 "trtype": "TCP", 00:13:28.159 "adrfam": "IPv4", 00:13:28.159 "traddr": "10.0.0.1", 00:13:28.159 "trsvcid": "45050" 00:13:28.159 }, 00:13:28.159 "auth": { 00:13:28.159 "state": "completed", 00:13:28.159 "digest": "sha512", 00:13:28.159 "dhgroup": "ffdhe8192" 00:13:28.159 } 00:13:28.159 } 
00:13:28.159 ]' 00:13:28.159 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.418 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.418 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.418 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:28.418 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.418 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.418 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.418 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.677 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:00:MDNjMmY3ZTY0MmM5NGUyMDZjZjBjNjJiZDBkZjhlM2RmNmI5NzIxMWE2NTllNzc5B9qouw==: --dhchap-ctrl-secret DHHC-1:03:ZjdlMmIyMWI4ZTczZjM2NTEyYjk2MDM0Njk2MmFmNzhhMDU2Nzk2YTQ0Y2FhZDRhY2E0NDQyMWRhM2M1NDczZicqe88=: 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.613 22:39:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:29.613 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:30.179 request: 00:13:30.179 { 00:13:30.179 "name": "nvme0", 00:13:30.179 "trtype": "tcp", 00:13:30.179 "traddr": "10.0.0.2", 00:13:30.179 "adrfam": "ipv4", 00:13:30.179 "trsvcid": "4420", 00:13:30.179 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:30.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385", 00:13:30.179 "prchk_reftag": false, 00:13:30.179 "prchk_guard": false, 00:13:30.179 "hdgst": false, 00:13:30.179 "ddgst": false, 00:13:30.179 "dhchap_key": "key2", 00:13:30.179 "method": "bdev_nvme_attach_controller", 00:13:30.179 "req_id": 1 00:13:30.179 } 00:13:30.179 Got JSON-RPC error response 00:13:30.179 response: 00:13:30.179 { 00:13:30.179 "code": -5, 00:13:30.179 "message": "Input/output error" 00:13:30.179 } 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:30.179 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.180 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.180 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.746 request: 00:13:30.746 { 00:13:30.746 "name": "nvme0", 00:13:30.746 "trtype": "tcp", 00:13:30.746 "traddr": "10.0.0.2", 00:13:30.746 "adrfam": "ipv4", 00:13:30.746 "trsvcid": "4420", 00:13:30.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:30.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385", 00:13:30.747 "prchk_reftag": false, 00:13:30.747 "prchk_guard": false, 00:13:30.747 "hdgst": false, 00:13:30.747 "ddgst": false, 00:13:30.747 "dhchap_key": "key1", 00:13:30.747 "dhchap_ctrlr_key": "ckey2", 00:13:30.747 "method": "bdev_nvme_attach_controller", 00:13:30.747 "req_id": 1 00:13:30.747 } 00:13:30.747 Got JSON-RPC error response 00:13:30.747 response: 00:13:30.747 { 00:13:30.747 "code": -5, 00:13:30.747 "message": "Input/output error" 00:13:30.747 } 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key1 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.747 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.314 request: 00:13:31.314 { 00:13:31.314 "name": "nvme0", 00:13:31.314 "trtype": "tcp", 00:13:31.314 "traddr": "10.0.0.2", 00:13:31.314 "adrfam": "ipv4", 00:13:31.314 "trsvcid": "4420", 00:13:31.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385", 00:13:31.314 "prchk_reftag": false, 00:13:31.314 "prchk_guard": false, 00:13:31.314 "hdgst": false, 00:13:31.314 "ddgst": false, 00:13:31.314 "dhchap_key": "key1", 00:13:31.314 "dhchap_ctrlr_key": "ckey1", 00:13:31.314 "method": "bdev_nvme_attach_controller", 00:13:31.314 "req_id": 1 00:13:31.314 } 00:13:31.314 Got JSON-RPC error response 00:13:31.314 response: 00:13:31.314 { 00:13:31.314 "code": -5, 00:13:31.314 "message": "Input/output error" 00:13:31.314 } 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69413 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69413 ']' 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69413 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69413 00:13:31.573 killing process with pid 69413 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69413' 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69413 00:13:31.573 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69413 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72498 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72498 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72498 ']' 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.831 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
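For the error-path cases that follow, the first target process (pid 69413) is killed and a fresh nvmf_tgt is started with the nvmf_auth debug log component enabled, so failed authentication attempts can also be traced on the target side. The launch line, as it appears in this run (the network namespace and the -e 0xFFFF trace mask come from the test environment rather than from the auth test itself):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth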
00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72498 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72498 ']' 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.768 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.334 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.334 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:33.334 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:33.334 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.334 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.334 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.910 00:13:33.910 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.910 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.910 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.170 { 00:13:34.170 "cntlid": 1, 00:13:34.170 "qid": 0, 00:13:34.170 "state": "enabled", 00:13:34.170 "thread": "nvmf_tgt_poll_group_000", 00:13:34.170 "listen_address": { 00:13:34.170 "trtype": "TCP", 00:13:34.170 "adrfam": "IPv4", 00:13:34.170 "traddr": "10.0.0.2", 00:13:34.170 "trsvcid": "4420" 00:13:34.170 }, 00:13:34.170 "peer_address": { 00:13:34.170 "trtype": "TCP", 00:13:34.170 "adrfam": "IPv4", 00:13:34.170 "traddr": "10.0.0.1", 00:13:34.170 "trsvcid": "33564" 00:13:34.170 }, 00:13:34.170 "auth": { 00:13:34.170 "state": "completed", 00:13:34.170 "digest": "sha512", 00:13:34.170 "dhgroup": "ffdhe8192" 00:13:34.170 } 00:13:34.170 } 00:13:34.170 ]' 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.170 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.170 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.428 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.428 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.428 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.428 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.428 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.686 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-secret DHHC-1:03:MDFiNTc3ZjFjMWEzYjE3NTIyN2MwYTFmODUxM2IzMmE5MDI1NDc2OGRlMDRlMDM1ZGI1YmM1ZWMxYTgzNWZiYsEQfD4=: 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --dhchap-key key3 00:13:35.253 22:39:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:35.253 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.819 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.077 request: 00:13:36.077 { 00:13:36.077 "name": "nvme0", 00:13:36.077 "trtype": "tcp", 00:13:36.077 "traddr": "10.0.0.2", 00:13:36.077 "adrfam": "ipv4", 00:13:36.077 "trsvcid": "4420", 00:13:36.077 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385", 00:13:36.077 "prchk_reftag": false, 00:13:36.077 "prchk_guard": false, 00:13:36.077 "hdgst": false, 00:13:36.077 "ddgst": false, 00:13:36.077 "dhchap_key": "key3", 00:13:36.077 "method": "bdev_nvme_attach_controller", 00:13:36.077 "req_id": 1 00:13:36.077 } 00:13:36.077 Got JSON-RPC error response 00:13:36.077 response: 00:13:36.077 { 00:13:36.077 "code": -5, 00:13:36.077 "message": "Input/output error" 00:13:36.077 } 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
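The Input/output error above is the expected outcome of the negative path: the host was just limited to sha256 digests while the working configuration established earlier used sha512 with ffdhe8192 and key3, so the attach RPC is supposed to fail with -5. A minimal host-side sketch of the same check, assuming a target already listening on 10.0.0.2:4420, a host RPC socket at /var/tmp/host.sock, HOSTNQN holding the host NQN used above, and rpc.py standing for SPDK's scripts/rpc.py:

    # restrict the host to a digest the established key/target setup does not accept
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # the attach must now fail; a successful connect here is a test failure
    if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "unexpected: attach succeeded despite the digest mismatch" >&2
        exit 1
    fi

The suite repeats the pattern just below with the digest list widened back to sha256,sha384,sha512 but the DH groups pinned to ffdhe2048, and the attach fails the same way: both the digest list and the DH group list have to line up with the target side for authentication to proceed.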
00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:36.077 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.335 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.593 request: 00:13:36.593 { 00:13:36.593 "name": "nvme0", 00:13:36.593 "trtype": "tcp", 00:13:36.593 "traddr": "10.0.0.2", 00:13:36.593 "adrfam": "ipv4", 00:13:36.593 "trsvcid": "4420", 00:13:36.593 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385", 00:13:36.593 "prchk_reftag": false, 00:13:36.593 "prchk_guard": false, 00:13:36.593 "hdgst": false, 00:13:36.593 "ddgst": false, 00:13:36.593 "dhchap_key": "key3", 00:13:36.593 "method": "bdev_nvme_attach_controller", 00:13:36.593 "req_id": 1 00:13:36.593 } 00:13:36.593 Got JSON-RPC error response 00:13:36.593 response: 00:13:36.593 { 00:13:36.593 "code": -5, 00:13:36.593 "message": "Input/output error" 00:13:36.593 } 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:36.593 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:36.594 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:36.594 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.853 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:36.854 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:13:37.112 request: 00:13:37.112 { 00:13:37.112 "name": "nvme0", 00:13:37.112 "trtype": "tcp", 00:13:37.112 "traddr": "10.0.0.2", 00:13:37.112 "adrfam": "ipv4", 00:13:37.112 "trsvcid": "4420", 00:13:37.112 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:37.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385", 00:13:37.112 "prchk_reftag": false, 00:13:37.112 "prchk_guard": false, 00:13:37.112 "hdgst": false, 00:13:37.112 "ddgst": false, 00:13:37.112 "dhchap_key": "key0", 00:13:37.112 "dhchap_ctrlr_key": "key1", 00:13:37.112 "method": "bdev_nvme_attach_controller", 00:13:37.112 "req_id": 1 00:13:37.112 } 00:13:37.112 Got JSON-RPC error response 00:13:37.112 response: 00:13:37.112 { 00:13:37.112 "code": -5, 00:13:37.112 "message": "Input/output error" 00:13:37.112 } 00:13:37.112 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:37.112 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:37.112 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:37.112 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:37.370 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:37.370 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:37.629 00:13:37.629 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:37.629 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.629 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:37.887 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.887 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.887 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69445 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69445 ']' 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69445 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.146 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69445 00:13:38.414 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:38.414 killing process with pid 69445 00:13:38.414 22:39:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:38.414 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69445' 00:13:38.414 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69445 00:13:38.414 22:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69445 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.732 rmmod nvme_tcp 00:13:38.732 rmmod nvme_fabrics 00:13:38.732 rmmod nvme_keyring 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72498 ']' 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72498 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72498 ']' 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72498 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72498 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:38.732 killing process with pid 72498 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72498' 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72498 00:13:38.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72498 00:13:38.990 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.990 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.990 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.990 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uSM /tmp/spdk.key-sha256.2UY /tmp/spdk.key-sha384.bdA /tmp/spdk.key-sha512.1h1 /tmp/spdk.key-sha512.ZNv /tmp/spdk.key-sha384.OfK /tmp/spdk.key-sha256.OPl '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:38.991 00:13:38.991 real 2m55.107s 00:13:38.991 user 6m58.167s 00:13:38.991 sys 0m27.914s 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.991 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.991 ************************************ 00:13:38.991 END TEST nvmf_auth_target 00:13:38.991 ************************************ 00:13:39.250 22:39:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:39.250 22:39:56 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:39.250 22:39:56 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:39.250 22:39:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:39.250 22:39:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.250 22:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.250 ************************************ 00:13:39.250 START TEST nvmf_bdevio_no_huge 00:13:39.250 ************************************ 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:39.250 * Looking for test storage... 00:13:39.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.250 
22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.250 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.251 22:39:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:39.251 Cannot find device "nvmf_tgt_br" 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:39.251 22:39:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.251 Cannot find device "nvmf_tgt_br2" 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:39.251 Cannot find device "nvmf_tgt_br" 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:39.251 Cannot find device "nvmf_tgt_br2" 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:39.251 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:39.510 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:39.511 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:39.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:39.772 00:13:39.772 --- 10.0.0.2 ping statistics --- 00:13:39.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.772 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:39.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:39.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:13:39.772 00:13:39.772 --- 10.0.0.3 ping statistics --- 00:13:39.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.772 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:39.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:13:39.772 00:13:39.772 --- 10.0.0.1 ping statistics --- 00:13:39.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.772 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72822 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72822 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72822 ']' 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.772 22:39:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:39.772 [2024-07-15 22:39:57.442525] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:13:39.772 [2024-07-15 22:39:57.442635] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:39.772 [2024-07-15 22:39:57.587794] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.033 [2024-07-15 22:39:57.762949] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
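Everything in this suite runs without hugepages: the EAL parameters above show -m 1024 --no-huge --iova-mode=va, i.e. the target gets a 1 GiB heap of ordinary pages inside the nvmf_tgt_ns_spdk namespace. A rough sketch of that launch, assuming the same namespace and binary paths as the log and simplifying the waitforlisten helper to a plain RPC poll:

    # start the target in the test namespace with a 1 GiB non-hugepage heap
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # wait until the app answers on its default RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.1
    done

The bdevio initiator started further down is launched the same way (--no-huge -s 1024), so neither end of the TCP connection relies on hugepage memory.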
00:13:40.033 [2024-07-15 22:39:57.763033] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.033 [2024-07-15 22:39:57.763045] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.033 [2024-07-15 22:39:57.763055] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.033 [2024-07-15 22:39:57.763062] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.033 [2024-07-15 22:39:57.763175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.033 [2024-07-15 22:39:57.764052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.033 [2024-07-15 22:39:57.764150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.033 [2024-07-15 22:39:57.764149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.033 [2024-07-15 22:39:57.769844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.966 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.966 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:40.966 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.966 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.966 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.966 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.967 [2024-07-15 22:39:58.569677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.967 Malloc0 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.967 [2024-07-15 22:39:58.619572] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:40.967 { 00:13:40.967 "params": { 00:13:40.967 "name": "Nvme$subsystem", 00:13:40.967 "trtype": "$TEST_TRANSPORT", 00:13:40.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.967 "adrfam": "ipv4", 00:13:40.967 "trsvcid": "$NVMF_PORT", 00:13:40.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.967 "hdgst": ${hdgst:-false}, 00:13:40.967 "ddgst": ${ddgst:-false} 00:13:40.967 }, 00:13:40.967 "method": "bdev_nvme_attach_controller" 00:13:40.967 } 00:13:40.967 EOF 00:13:40.967 )") 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:40.967 22:39:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:40.967 "params": { 00:13:40.967 "name": "Nvme1", 00:13:40.967 "trtype": "tcp", 00:13:40.967 "traddr": "10.0.0.2", 00:13:40.967 "adrfam": "ipv4", 00:13:40.967 "trsvcid": "4420", 00:13:40.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:40.967 "hdgst": false, 00:13:40.967 "ddgst": false 00:13:40.967 }, 00:13:40.967 "method": "bdev_nvme_attach_controller" 00:13:40.967 }' 00:13:40.967 [2024-07-15 22:39:58.690251] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:13:40.967 [2024-07-15 22:39:58.690411] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72858 ] 00:13:41.224 [2024-07-15 22:39:58.832520] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.224 [2024-07-15 22:39:58.994004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.224 [2024-07-15 22:39:58.994103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.224 [2024-07-15 22:39:58.994111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.224 [2024-07-15 22:39:59.007391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:41.482 I/O targets: 00:13:41.482 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:41.482 00:13:41.482 00:13:41.482 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.482 http://cunit.sourceforge.net/ 00:13:41.482 00:13:41.482 00:13:41.482 Suite: bdevio tests on: Nvme1n1 00:13:41.482 Test: blockdev write read block ...passed 00:13:41.482 Test: blockdev write zeroes read block ...passed 00:13:41.482 Test: blockdev write zeroes read no split ...passed 00:13:41.482 Test: blockdev write zeroes read split ...passed 00:13:41.482 Test: blockdev write zeroes read split partial ...passed 00:13:41.482 Test: blockdev reset ...[2024-07-15 22:39:59.227236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:41.482 [2024-07-15 22:39:59.227398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143ea10 (9): Bad file descriptor 00:13:41.482 [2024-07-15 22:39:59.240627] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:41.482 passed 00:13:41.482 Test: blockdev write read 8 blocks ...passed 00:13:41.482 Test: blockdev write read size > 128k ...passed 00:13:41.482 Test: blockdev write read invalid size ...passed 00:13:41.482 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:41.482 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:41.482 Test: blockdev write read max offset ...passed 00:13:41.482 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:41.482 Test: blockdev writev readv 8 blocks ...passed 00:13:41.482 Test: blockdev writev readv 30 x 1block ...passed 00:13:41.482 Test: blockdev writev readv block ...passed 00:13:41.482 Test: blockdev writev readv size > 128k ...passed 00:13:41.482 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:41.482 Test: blockdev comparev and writev ...[2024-07-15 22:39:59.248023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.248089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.248112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.248124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.248658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.248687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.248706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.248717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.249118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.249150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.249168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.249179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.249570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.249600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.249618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.482 [2024-07-15 22:39:59.249629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:41.482 passed 00:13:41.482 Test: blockdev nvme passthru rw ...passed 00:13:41.482 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:39:59.250441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.482 [2024-07-15 22:39:59.250475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.250590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.482 [2024-07-15 22:39:59.250612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.250722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.482 [2024-07-15 22:39:59.250747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:41.482 [2024-07-15 22:39:59.250857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.482 [2024-07-15 22:39:59.250893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:41.482 passed 00:13:41.482 Test: blockdev nvme admin passthru ...passed 00:13:41.482 Test: blockdev copy ...passed 00:13:41.482 00:13:41.482 Run Summary: Type Total Ran Passed Failed Inactive 00:13:41.482 suites 1 1 n/a 0 0 00:13:41.482 tests 23 23 23 0 0 00:13:41.482 asserts 152 152 152 0 n/a 00:13:41.482 00:13:41.482 Elapsed time = 0.228 seconds 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.048 rmmod nvme_tcp 00:13:42.048 rmmod nvme_fabrics 00:13:42.048 rmmod nvme_keyring 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72822 ']' 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72822 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72822 ']' 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72822 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72822 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72822' 00:13:42.048 killing process with pid 72822 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72822 00:13:42.048 22:39:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72822 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:42.617 00:13:42.617 real 0m3.503s 00:13:42.617 user 0m11.350s 00:13:42.617 sys 0m1.437s 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:42.617 22:40:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.617 ************************************ 00:13:42.617 END TEST nvmf_bdevio_no_huge 00:13:42.617 ************************************ 00:13:42.617 22:40:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:42.617 22:40:00 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:42.617 22:40:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:42.617 22:40:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.617 22:40:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.617 ************************************ 00:13:42.617 START TEST nvmf_tls 00:13:42.617 ************************************ 00:13:42.617 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:42.876 * Looking for test storage... 
00:13:42.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.876 22:40:00 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:42.877 Cannot find device "nvmf_tgt_br" 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.877 Cannot find device "nvmf_tgt_br2" 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:42.877 Cannot find device "nvmf_tgt_br" 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:42.877 Cannot find device "nvmf_tgt_br2" 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.877 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:43.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:13:43.136 00:13:43.136 --- 10.0.0.2 ping statistics --- 00:13:43.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.136 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:43.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:43.136 00:13:43.136 --- 10.0.0.3 ping statistics --- 00:13:43.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.136 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:13:43.136 00:13:43.136 --- 10.0.0.1 ping statistics --- 00:13:43.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.136 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73041 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73041 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73041 ']' 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.136 22:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.136 [2024-07-15 22:40:00.945916] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:13:43.137 [2024-07-15 22:40:00.946040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.395 [2024-07-15 22:40:01.085808] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.654 [2024-07-15 22:40:01.233193] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.654 [2024-07-15 22:40:01.233259] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
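The nvmf_veth_init run above builds the test topology the target was just started into: a host-side veth pair (nvmf_init_if/nvmf_init_br), two more pairs whose nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace, all enslaved to the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, verified by the three pings. A minimal sketch of the same idea, cut down to a single veth pair with illustrative names (it omits the bridge, the second target interface and the iptables rules the script also installs; run as root):

# create a namespace for the target and a veth pair linking it to the host
ip netns add tgt_ns
ip link add host_if type veth peer name tgt_if
ip link set tgt_if netns tgt_ns
# address both ends and bring everything up
ip addr add 10.0.0.1/24 dev host_if
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt_if
ip link set host_if up
ip netns exec tgt_ns ip link set tgt_if up
ip netns exec tgt_ns ip link set lo up
# reachability check in both directions, as the script does with ping -c 1
ping -c 1 10.0.0.2
ip netns exec tgt_ns ping -c 1 10.0.0.1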
00:13:43.654 [2024-07-15 22:40:01.233281] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.654 [2024-07-15 22:40:01.233290] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.654 [2024-07-15 22:40:01.233297] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.654 [2024-07-15 22:40:01.233330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:44.222 22:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:44.488 true 00:13:44.488 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:44.488 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:44.756 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:44.756 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:44.756 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:45.014 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.014 22:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:45.580 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:45.580 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:45.580 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:45.839 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:45.839 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.098 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:46.098 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:46.098 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.098 22:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:46.433 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:46.433 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:46.433 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:46.691 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.691 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
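The sequence above drives the ssl socket implementation's tls_version option through rpc.py while the target is still paused in --wait-for-rpc: set 13, read it back, set 7, read it back, and fail if the readback ever differs. The same set-and-verify pattern, collapsed into a small helper (check_tls_version is illustrative and not part of the test scripts; rpc.py is the script already used above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
check_tls_version() {
    local want=$1
    # program the ssl socket implementation, then read the option back
    "$rpc" sock_impl_set_options -i ssl --tls-version "$want"
    local got
    got=$("$rpc" sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ $got == "$want" ]] || { echo "tls_version mismatch: $got != $want" >&2; return 1; }
}
check_tls_version 13
check_tls_version 7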
00:13:46.950 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:46.950 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:46.950 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:47.207 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:47.207 22:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:47.465 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.INg4NQRaoZ 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.zhBu0RObRN 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.INg4NQRaoZ 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zhBu0RObRN 00:13:47.466 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:47.724 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:47.983 [2024-07-15 22:40:05.741307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:47.983 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.INg4NQRaoZ 00:13:47.983 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.INg4NQRaoZ 00:13:47.983 22:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.242 [2024-07-15 22:40:06.037239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.242 22:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:48.501 22:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:49.068 [2024-07-15 22:40:06.613390] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:49.068 [2024-07-15 22:40:06.613635] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.068 22:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:49.068 malloc0 00:13:49.069 22:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.327 22:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.INg4NQRaoZ 00:13:49.586 [2024-07-15 22:40:07.356573] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:49.587 22:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.INg4NQRaoZ 00:14:01.792 Initializing NVMe Controllers 00:14:01.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.792 Initialization complete. Launching workers. 
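To recap the setup behind the perf run just launched: two interchange-format PSKs (NVMeTLSkey-1:01:<base64 payload>:) were generated with format_interchange_psk, written to mktemp'd files and chmod'ed to 0600, and setup_nvmf_tgt then configured the target over rpc.py before spdk_nvme_perf was started inside the namespace with -S ssl and --psk-path. Collapsed into one place, the target-side calls amount to the following (the key path is a placeholder for the mktemp'd file; every RPC and argument is taken from the trace above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
psk=/tmp/psk.key   # placeholder for the 0600 key file holding the NVMeTLSkey-1 string
"$rpc" sock_impl_set_options -i ssl --tls-version 13
"$rpc" framework_start_init
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k makes the listener TLS-enabled (flagged experimental in the notices above)
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# only hosts registered with a PSK may connect; host1 gets the first key
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"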
00:14:01.792 ======================================================== 00:14:01.792 Latency(us) 00:14:01.792 Device Information : IOPS MiB/s Average min max 00:14:01.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9348.37 36.52 6847.90 1817.36 17645.08 00:14:01.792 ======================================================== 00:14:01.792 Total : 9348.37 36.52 6847.90 1817.36 17645.08 00:14:01.792 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.INg4NQRaoZ 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.INg4NQRaoZ' 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73284 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73284 /var/tmp/bdevperf.sock 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73284 ']' 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.792 22:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.792 [2024-07-15 22:40:17.644487] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
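The table above is the spdk_nvme_perf result over the TLS-protected listener; the run_bdevperf case starting here drives the same connection through the bdev layer instead. Stripped of the waitforlisten/cleanup plumbing, its shape is the three commands visible in the trace (socket path, arguments and key file exactly as used in this run):

sock=/var/tmp/bdevperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# start bdevperf idle (-z) with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
# attach a TLS-protected controller; --psk points at the interchange-format key file
"$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.INg4NQRaoZ
# drive I/O against the attached bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests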
00:14:01.792 [2024-07-15 22:40:17.644590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73284 ] 00:14:01.792 [2024-07-15 22:40:17.784894] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.792 [2024-07-15 22:40:17.911725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.792 [2024-07-15 22:40:17.970795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.792 22:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.792 22:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:01.792 22:40:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.INg4NQRaoZ 00:14:01.792 [2024-07-15 22:40:18.819188] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.792 [2024-07-15 22:40:18.819331] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:01.792 TLSTESTn1 00:14:01.792 22:40:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:01.792 Running I/O for 10 seconds... 00:14:11.778 00:14:11.778 Latency(us) 00:14:11.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.778 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:11.778 Verification LBA range: start 0x0 length 0x2000 00:14:11.778 TLSTESTn1 : 10.02 3927.47 15.34 0.00 0.00 32529.06 6762.12 31218.97 00:14:11.778 =================================================================================================================== 00:14:11.778 Total : 3927.47 15.34 0.00 0.00 32529.06 6762.12 31218.97 00:14:11.778 0 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73284 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73284 ']' 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73284 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73284 00:14:11.778 killing process with pid 73284 00:14:11.778 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.778 00:14:11.778 Latency(us) 00:14:11.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.778 =================================================================================================================== 00:14:11.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73284' 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73284 00:14:11.778 [2024-07-15 22:40:29.084320] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73284 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zhBu0RObRN 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zhBu0RObRN 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zhBu0RObRN 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zhBu0RObRN' 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73413 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.778 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73413 /var/tmp/bdevperf.sock 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73413 ']' 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.779 22:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.779 [2024-07-15 22:40:29.360755] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:11.779 [2024-07-15 22:40:29.360854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73413 ] 00:14:11.779 [2024-07-15 22:40:29.493375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.779 [2024-07-15 22:40:29.598364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.036 [2024-07-15 22:40:29.650093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.602 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.602 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:12.602 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zhBu0RObRN 00:14:12.860 [2024-07-15 22:40:30.590169] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:12.860 [2024-07-15 22:40:30.590339] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:12.860 [2024-07-15 22:40:30.596174] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:12.860 [2024-07-15 22:40:30.597092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15613d0 (107): Transport endpoint is not connected 00:14:12.860 [2024-07-15 22:40:30.598084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15613d0 (9): Bad file descriptor 00:14:12.860 [2024-07-15 22:40:30.599081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:12.860 [2024-07-15 22:40:30.599106] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:12.860 [2024-07-15 22:40:30.599121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
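This first negative case fails by design: host1 is registered on the target with the first key, but the attach above presented the second key file (/tmp/tmp.zhBu0RObRN), so the TLS handshake cannot complete and the connection is torn down with the errno 107 / bad file descriptor sequence seen here; the JSON-RPC error response follows below. The two keys come from the interchange-key helper exercised earlier, roughly as follows (format_interchange_psk is defined in the nvmf/common.sh sourced by tls.sh; sourcing it standalone is shown only for illustration):

# nvmf/common.sh provides format_interchange_psk (already sourced by tls.sh above)
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
key1=$(format_interchange_psk 00112233445566778899aabbccddeeff 1)   # registered for host1
key2=$(format_interchange_psk ffeeddccbbaa99887766554433221100 1)   # deliberately mismatched key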
00:14:12.860 request: 00:14:12.860 { 00:14:12.860 "name": "TLSTEST", 00:14:12.860 "trtype": "tcp", 00:14:12.860 "traddr": "10.0.0.2", 00:14:12.860 "adrfam": "ipv4", 00:14:12.860 "trsvcid": "4420", 00:14:12.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.860 "prchk_reftag": false, 00:14:12.860 "prchk_guard": false, 00:14:12.860 "hdgst": false, 00:14:12.860 "ddgst": false, 00:14:12.860 "psk": "/tmp/tmp.zhBu0RObRN", 00:14:12.860 "method": "bdev_nvme_attach_controller", 00:14:12.860 "req_id": 1 00:14:12.860 } 00:14:12.860 Got JSON-RPC error response 00:14:12.860 response: 00:14:12.860 { 00:14:12.860 "code": -5, 00:14:12.860 "message": "Input/output error" 00:14:12.860 } 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73413 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73413 ']' 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73413 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73413 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:12.860 killing process with pid 73413 00:14:12.860 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.860 00:14:12.860 Latency(us) 00:14:12.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.860 =================================================================================================================== 00:14:12.860 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73413' 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73413 00:14:12.860 [2024-07-15 22:40:30.649750] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:12.860 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73413 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.INg4NQRaoZ 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.INg4NQRaoZ 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:13.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.INg4NQRaoZ 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.INg4NQRaoZ' 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73441 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73441 /var/tmp/bdevperf.sock 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73441 ']' 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.119 22:40:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.119 [2024-07-15 22:40:30.936086] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:13.119 [2024-07-15 22:40:30.936189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73441 ] 00:14:13.377 [2024-07-15 22:40:31.071095] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.377 [2024-07-15 22:40:31.182087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.635 [2024-07-15 22:40:31.234272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:14.202 22:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.202 22:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:14.202 22:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.INg4NQRaoZ 00:14:14.460 [2024-07-15 22:40:32.163768] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.460 [2024-07-15 22:40:32.163916] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:14.460 [2024-07-15 22:40:32.168993] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:14.460 [2024-07-15 22:40:32.169032] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:14.460 [2024-07-15 22:40:32.169080] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:14.460 [2024-07-15 22:40:32.169695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f83d0 (107): Transport endpoint is not connected 00:14:14.460 [2024-07-15 22:40:32.170685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f83d0 (9): Bad file descriptor 00:14:14.460 [2024-07-15 22:40:32.171680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:14.460 [2024-07-15 22:40:32.171703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:14.460 [2024-07-15 22:40:32.171735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
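The failure here is also the intended outcome: the attach presents the identity NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1, and the target only has a PSK registered for host1, so posix_sock_psk_find_session_server_cb has nothing to offer and the connection is dropped; the JSON-RPC error response follows below. Making host2 a valid initiator would take one more registration on the target side, along these lines (the key file name is illustrative):

# give host2 its own PSK on cnode1; without this, the host2 identity above cannot be resolved
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key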
00:14:14.460 request: 00:14:14.460 { 00:14:14.460 "name": "TLSTEST", 00:14:14.460 "trtype": "tcp", 00:14:14.460 "traddr": "10.0.0.2", 00:14:14.460 "adrfam": "ipv4", 00:14:14.460 "trsvcid": "4420", 00:14:14.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.460 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:14.460 "prchk_reftag": false, 00:14:14.460 "prchk_guard": false, 00:14:14.460 "hdgst": false, 00:14:14.460 "ddgst": false, 00:14:14.460 "psk": "/tmp/tmp.INg4NQRaoZ", 00:14:14.460 "method": "bdev_nvme_attach_controller", 00:14:14.460 "req_id": 1 00:14:14.460 } 00:14:14.460 Got JSON-RPC error response 00:14:14.460 response: 00:14:14.460 { 00:14:14.460 "code": -5, 00:14:14.460 "message": "Input/output error" 00:14:14.460 } 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73441 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73441 ']' 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73441 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73441 00:14:14.460 killing process with pid 73441 00:14:14.460 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.460 00:14:14.460 Latency(us) 00:14:14.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.460 =================================================================================================================== 00:14:14.460 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73441' 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73441 00:14:14.460 [2024-07-15 22:40:32.221068] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:14.460 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73441 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.INg4NQRaoZ 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.INg4NQRaoZ 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:14.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.INg4NQRaoZ 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.INg4NQRaoZ' 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73463 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73463 /var/tmp/bdevperf.sock 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73463 ']' 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.719 22:40:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.719 [2024-07-15 22:40:32.493586] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:14.719 [2024-07-15 22:40:32.493725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73463 ] 00:14:14.978 [2024-07-15 22:40:32.634517] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.978 [2024-07-15 22:40:32.737596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.978 [2024-07-15 22:40:32.791148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.INg4NQRaoZ 00:14:15.913 [2024-07-15 22:40:33.663868] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.913 [2024-07-15 22:40:33.664027] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:15.913 [2024-07-15 22:40:33.669127] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:15.913 [2024-07-15 22:40:33.669165] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:15.913 [2024-07-15 22:40:33.669213] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:15.913 [2024-07-15 22:40:33.669831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f13d0 (107): Transport endpoint is not connected 00:14:15.913 [2024-07-15 22:40:33.670819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f13d0 (9): Bad file descriptor 00:14:15.913 [2024-07-15 22:40:33.671816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:15.913 [2024-07-15 22:40:33.671855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:15.913 [2024-07-15 22:40:33.671885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:15.913 request: 00:14:15.913 { 00:14:15.913 "name": "TLSTEST", 00:14:15.913 "trtype": "tcp", 00:14:15.913 "traddr": "10.0.0.2", 00:14:15.913 "adrfam": "ipv4", 00:14:15.913 "trsvcid": "4420", 00:14:15.913 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:15.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.913 "prchk_reftag": false, 00:14:15.913 "prchk_guard": false, 00:14:15.913 "hdgst": false, 00:14:15.913 "ddgst": false, 00:14:15.913 "psk": "/tmp/tmp.INg4NQRaoZ", 00:14:15.913 "method": "bdev_nvme_attach_controller", 00:14:15.913 "req_id": 1 00:14:15.913 } 00:14:15.913 Got JSON-RPC error response 00:14:15.913 response: 00:14:15.913 { 00:14:15.913 "code": -5, 00:14:15.913 "message": "Input/output error" 00:14:15.913 } 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73463 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73463 ']' 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73463 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73463 00:14:15.913 killing process with pid 73463 00:14:15.913 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.913 00:14:15.913 Latency(us) 00:14:15.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.913 =================================================================================================================== 00:14:15.913 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73463' 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73463 00:14:15.913 [2024-07-15 22:40:33.720876] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:15.913 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73463 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
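Each of these expected-failure cases is wrapped in NOT from autotest_common.sh: run_bdevperf is supposed to fail, and the local es=0 / es=1 / (( es > 128 )) lines traced here are NOT's own bookkeeping that turns an expected failure into a passing step. A stripped-down sketch of the pattern, assuming the helper only needs to invert the exit status (the real helper in autotest_common.sh does more bookkeeping around crashes and missing commands):

NOT() {
    # run the command; succeed only if it fails
    if "$@"; then
        return 1
    fi
    return 0
}
# e.g. the case that follows: an attach with no PSK at all is expected to fail
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''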
00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73495 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73495 /var/tmp/bdevperf.sock 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73495 ']' 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.172 22:40:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.430 [2024-07-15 22:40:34.017238] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:16.430 [2024-07-15 22:40:34.017331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73495 ] 00:14:16.430 [2024-07-15 22:40:34.149323] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.430 [2024-07-15 22:40:34.251686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.687 [2024-07-15 22:40:34.303977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.254 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.254 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:17.254 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:17.514 [2024-07-15 22:40:35.226747] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:17.514 [2024-07-15 22:40:35.229518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b4da0 (9): Bad file descriptor 00:14:17.514 [2024-07-15 22:40:35.230514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:17.514 [2024-07-15 22:40:35.230540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:17.514 [2024-07-15 22:40:35.230554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:17.514 request: 00:14:17.514 { 00:14:17.514 "name": "TLSTEST", 00:14:17.514 "trtype": "tcp", 00:14:17.514 "traddr": "10.0.0.2", 00:14:17.514 "adrfam": "ipv4", 00:14:17.514 "trsvcid": "4420", 00:14:17.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.514 "prchk_reftag": false, 00:14:17.514 "prchk_guard": false, 00:14:17.514 "hdgst": false, 00:14:17.514 "ddgst": false, 00:14:17.514 "method": "bdev_nvme_attach_controller", 00:14:17.514 "req_id": 1 00:14:17.514 } 00:14:17.514 Got JSON-RPC error response 00:14:17.514 response: 00:14:17.514 { 00:14:17.514 "code": -5, 00:14:17.514 "message": "Input/output error" 00:14:17.514 } 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73495 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73495 ']' 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73495 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73495 00:14:17.514 killing process with pid 73495 00:14:17.514 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.514 00:14:17.514 Latency(us) 00:14:17.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.514 =================================================================================================================== 00:14:17.514 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73495' 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73495 00:14:17.514 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73495 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 73041 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73041 ']' 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73041 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73041 00:14:17.773 killing process with pid 73041 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
73041' 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73041 00:14:17.773 [2024-07-15 22:40:35.539738] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:17.773 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73041 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lLXS4YOAHK 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lLXS4YOAHK 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73528 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73528 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73528 ']' 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.032 22:40:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.290 [2024-07-15 22:40:35.903046] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
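The key_long string above is built by a small Python helper: the configured secret is kept as ASCII bytes, a CRC-32 of those bytes is appended little-endian, the result is base64-encoded, and the whole thing is wrapped as NVMeTLSkey-1:<hash id>:<base64>:. A sketch of that construction, assuming exactly this CRC-plus-base64 layout (which matches the NVMeTLSkey-1:02:... value printed above):

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode("ascii"), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the secret, appended little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
}

key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"   # anything more permissive is rejected later in this run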
00:14:18.290 [2024-07-15 22:40:35.903135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.290 [2024-07-15 22:40:36.043242] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.549 [2024-07-15 22:40:36.142859] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.549 [2024-07-15 22:40:36.142932] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.549 [2024-07-15 22:40:36.142944] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.549 [2024-07-15 22:40:36.142952] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.549 [2024-07-15 22:40:36.142958] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.549 [2024-07-15 22:40:36.142984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.549 [2024-07-15 22:40:36.197121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lLXS4YOAHK 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lLXS4YOAHK 00:14:19.115 22:40:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.431 [2024-07-15 22:40:37.160257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.431 22:40:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:19.709 22:40:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:19.967 [2024-07-15 22:40:37.689472] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.968 [2024-07-15 22:40:37.689689] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.968 22:40:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:20.226 malloc0 00:14:20.226 22:40:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.488 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:20.746 
[2024-07-15 22:40:38.512492] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lLXS4YOAHK 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lLXS4YOAHK' 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73587 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73587 /var/tmp/bdevperf.sock 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73587 ']' 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.746 22:40:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.004 [2024-07-15 22:40:38.592392] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
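For reference, the setup_nvmf_tgt helper traced just above is only this RPC sequence against the target (the same commands, collected in order; rpc and key are shorthands for the paths used in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.lLXS4YOAHK

$rpc nvmf_create_transport -t tcp -o                          # TCP transport with default options
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                             # -k: listener requires the TLS secure channel
$rpc bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB RAM-backed namespace, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"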
00:14:21.004 [2024-07-15 22:40:38.592924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73587 ] 00:14:21.004 [2024-07-15 22:40:38.733341] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.261 [2024-07-15 22:40:38.857768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.261 [2024-07-15 22:40:38.914733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.195 22:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.195 22:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:22.195 22:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:22.195 [2024-07-15 22:40:39.900375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.195 [2024-07-15 22:40:39.900810] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:22.195 TLSTESTn1 00:14:22.195 22:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:22.454 Running I/O for 10 seconds... 00:14:32.422 00:14:32.422 Latency(us) 00:14:32.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.422 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:32.422 Verification LBA range: start 0x0 length 0x2000 00:14:32.422 TLSTESTn1 : 10.02 3750.23 14.65 0.00 0.00 34062.71 8162.21 29669.93 00:14:32.422 =================================================================================================================== 00:14:32.422 Total : 3750.23 14.65 0.00 0.00 34062.71 8162.21 29669.93 00:14:32.422 0 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73587 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73587 ']' 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73587 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73587 00:14:32.422 killing process with pid 73587 00:14:32.422 Received shutdown signal, test time was about 10.000000 seconds 00:14:32.422 00:14:32.422 Latency(us) 00:14:32.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.422 =================================================================================================================== 00:14:32.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73587' 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73587 00:14:32.422 [2024-07-15 22:40:50.189012] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:32.422 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73587 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lLXS4YOAHK 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lLXS4YOAHK 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lLXS4YOAHK 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lLXS4YOAHK 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lLXS4YOAHK' 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73723 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73723 /var/tmp/bdevperf.sock 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73723 ']' 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.681 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 [2024-07-15 22:40:50.479327] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:32.681 [2024-07-15 22:40:50.479710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73723 ] 00:14:32.945 [2024-07-15 22:40:50.616641] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.945 [2024-07-15 22:40:50.731596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.219 [2024-07-15 22:40:50.783488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.786 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.786 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:33.786 22:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:34.053 [2024-07-15 22:40:51.742157] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.053 [2024-07-15 22:40:51.742521] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:34.053 [2024-07-15 22:40:51.742643] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lLXS4YOAHK 00:14:34.053 request: 00:14:34.053 { 00:14:34.053 "name": "TLSTEST", 00:14:34.053 "trtype": "tcp", 00:14:34.053 "traddr": "10.0.0.2", 00:14:34.053 "adrfam": "ipv4", 00:14:34.053 "trsvcid": "4420", 00:14:34.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.053 "prchk_reftag": false, 00:14:34.053 "prchk_guard": false, 00:14:34.053 "hdgst": false, 00:14:34.053 "ddgst": false, 00:14:34.053 "psk": "/tmp/tmp.lLXS4YOAHK", 00:14:34.053 "method": "bdev_nvme_attach_controller", 00:14:34.053 "req_id": 1 00:14:34.053 } 00:14:34.053 Got JSON-RPC error response 00:14:34.053 response: 00:14:34.053 { 00:14:34.053 "code": -1, 00:14:34.053 "message": "Operation not permitted" 00:14:34.053 } 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73723 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73723 ']' 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73723 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73723 00:14:34.053 killing process with pid 73723 00:14:34.053 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.053 00:14:34.053 Latency(us) 00:14:34.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.053 =================================================================================================================== 00:14:34.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73723' 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73723 00:14:34.053 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73723 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73528 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73528 ']' 00:14:34.313 22:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73528 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73528 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:34.313 killing process with pid 73528 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73528' 00:14:34.313 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73528 00:14:34.314 [2024-07-15 22:40:52.026302] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:34.314 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73528 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73760 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73760 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:34.572 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73760 ']' 00:14:34.573 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.573 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.573 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.573 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.573 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.832 [2024-07-15 22:40:52.445333] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:34.832 [2024-07-15 22:40:52.445496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.832 [2024-07-15 22:40:52.597633] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.090 [2024-07-15 22:40:52.761292] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.090 [2024-07-15 22:40:52.761367] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.090 [2024-07-15 22:40:52.761381] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.090 [2024-07-15 22:40:52.761392] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.090 [2024-07-15 22:40:52.761402] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.090 [2024-07-15 22:40:52.761447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.090 [2024-07-15 22:40:52.839383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:35.656 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.656 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:35.656 22:40:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.656 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.656 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lLXS4YOAHK 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lLXS4YOAHK 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.lLXS4YOAHK 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lLXS4YOAHK 00:14:35.913 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:36.171 [2024-07-15 22:40:53.757265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.171 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:36.453 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:36.719 [2024-07-15 22:40:54.329350] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:36.719 [2024-07-15 22:40:54.329627] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.719 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:36.977 malloc0 00:14:36.977 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:37.235 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:37.493 [2024-07-15 22:40:55.111333] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:37.493 [2024-07-15 22:40:55.111386] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:37.493 [2024-07-15 22:40:55.111422] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:37.493 request: 00:14:37.493 { 00:14:37.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.493 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.493 "psk": "/tmp/tmp.lLXS4YOAHK", 00:14:37.493 "method": "nvmf_subsystem_add_host", 00:14:37.493 "req_id": 1 00:14:37.493 } 00:14:37.493 Got JSON-RPC error response 00:14:37.493 response: 00:14:37.493 { 00:14:37.493 "code": -32603, 00:14:37.493 "message": "Internal error" 00:14:37.493 } 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73760 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73760 ']' 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73760 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73760 00:14:37.493 killing process with pid 73760 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73760' 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73760 00:14:37.493 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73760 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lLXS4YOAHK 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
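The two permission failures driven by the earlier chmod 0666 — bdev_nvme_attach_controller answering -1/Operation not permitted on the initiator side, and nvmf_subsystem_add_host answering -32603 here — both come from the PSK-file permission check in bdev_nvme_load_psk/tcp_load_psk. A sketch of an equivalent pre-flight check, assuming the rule is simply that no group/other permission bits may be set (the helper name is made up for illustration):

# Hypothetical pre-flight check mirroring the 'Incorrect permissions for PSK file' errors above.
check_psk_perms() {
    local key=$1 mode
    mode=$(stat -c '%a' "$key")
    if (( 0$mode & 077 )); then
        echo "refusing $key: mode $mode is accessible to group/other, expected 0600" >&2
        return 1
    fi
}

chmod 0600 /tmp/tmp.lLXS4YOAHK && check_psk_perms /tmp/tmp.lLXS4YOAHK   # passes again once the mode is restored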
00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73824 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73824 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73824 ']' 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.752 22:40:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.752 [2024-07-15 22:40:55.461830] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:37.752 [2024-07-15 22:40:55.461938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.010 [2024-07-15 22:40:55.606322] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.010 [2024-07-15 22:40:55.700354] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.011 [2024-07-15 22:40:55.700408] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.011 [2024-07-15 22:40:55.700419] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.011 [2024-07-15 22:40:55.700428] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.011 [2024-07-15 22:40:55.700435] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.011 [2024-07-15 22:40:55.700461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.011 [2024-07-15 22:40:55.753646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lLXS4YOAHK 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lLXS4YOAHK 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:38.971 [2024-07-15 22:40:56.770028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.971 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.229 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:39.487 [2024-07-15 22:40:57.234138] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.487 [2024-07-15 22:40:57.234406] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.487 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:39.745 malloc0 00:14:39.745 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:40.004 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:40.263 [2024-07-15 22:40:57.938494] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73873 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73873 /var/tmp/bdevperf.sock 00:14:40.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
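The two JSON configurations dumped below are captured with save_config — once from the target's default /var/tmp/spdk.sock and once from the bdevperf socket — and the target copy is then fed back into a fresh nvmf_tgt through /dev/fd/62. Roughly (a sketch; the netns wrapper and flags are the ones used for the target throughout this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

tgtconf=$($rpc save_config)                                   # target configuration as JSON
bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)    # same dump from the bdevperf app

# Restart the target from the captured JSON (what the later -c /dev/fd/62 plus echo amounts to).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")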
00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73873 ']' 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.263 22:40:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.263 [2024-07-15 22:40:58.017119] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:40.263 [2024-07-15 22:40:58.017438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73873 ] 00:14:40.522 [2024-07-15 22:40:58.160490] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.522 [2024-07-15 22:40:58.285181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.522 [2024-07-15 22:40:58.343235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.455 22:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.455 22:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:41.456 22:40:58 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:41.456 [2024-07-15 22:40:59.189175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:41.456 [2024-07-15 22:40:59.189315] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:41.456 TLSTESTn1 00:14:41.456 22:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:42.021 22:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:42.021 "subsystems": [ 00:14:42.021 { 00:14:42.021 "subsystem": "keyring", 00:14:42.021 "config": [] 00:14:42.021 }, 00:14:42.021 { 00:14:42.022 "subsystem": "iobuf", 00:14:42.022 "config": [ 00:14:42.022 { 00:14:42.022 "method": "iobuf_set_options", 00:14:42.022 "params": { 00:14:42.022 "small_pool_count": 8192, 00:14:42.022 "large_pool_count": 1024, 00:14:42.022 "small_bufsize": 8192, 00:14:42.022 "large_bufsize": 135168 00:14:42.022 } 00:14:42.022 } 00:14:42.022 ] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "sock", 00:14:42.022 "config": [ 00:14:42.022 { 00:14:42.022 "method": "sock_set_default_impl", 00:14:42.022 "params": { 00:14:42.022 "impl_name": "uring" 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "sock_impl_set_options", 00:14:42.022 "params": { 00:14:42.022 "impl_name": "ssl", 00:14:42.022 "recv_buf_size": 4096, 00:14:42.022 "send_buf_size": 4096, 00:14:42.022 "enable_recv_pipe": true, 00:14:42.022 "enable_quickack": false, 00:14:42.022 "enable_placement_id": 0, 00:14:42.022 "enable_zerocopy_send_server": true, 
00:14:42.022 "enable_zerocopy_send_client": false, 00:14:42.022 "zerocopy_threshold": 0, 00:14:42.022 "tls_version": 0, 00:14:42.022 "enable_ktls": false 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "sock_impl_set_options", 00:14:42.022 "params": { 00:14:42.022 "impl_name": "posix", 00:14:42.022 "recv_buf_size": 2097152, 00:14:42.022 "send_buf_size": 2097152, 00:14:42.022 "enable_recv_pipe": true, 00:14:42.022 "enable_quickack": false, 00:14:42.022 "enable_placement_id": 0, 00:14:42.022 "enable_zerocopy_send_server": true, 00:14:42.022 "enable_zerocopy_send_client": false, 00:14:42.022 "zerocopy_threshold": 0, 00:14:42.022 "tls_version": 0, 00:14:42.022 "enable_ktls": false 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "sock_impl_set_options", 00:14:42.022 "params": { 00:14:42.022 "impl_name": "uring", 00:14:42.022 "recv_buf_size": 2097152, 00:14:42.022 "send_buf_size": 2097152, 00:14:42.022 "enable_recv_pipe": true, 00:14:42.022 "enable_quickack": false, 00:14:42.022 "enable_placement_id": 0, 00:14:42.022 "enable_zerocopy_send_server": false, 00:14:42.022 "enable_zerocopy_send_client": false, 00:14:42.022 "zerocopy_threshold": 0, 00:14:42.022 "tls_version": 0, 00:14:42.022 "enable_ktls": false 00:14:42.022 } 00:14:42.022 } 00:14:42.022 ] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "vmd", 00:14:42.022 "config": [] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "accel", 00:14:42.022 "config": [ 00:14:42.022 { 00:14:42.022 "method": "accel_set_options", 00:14:42.022 "params": { 00:14:42.022 "small_cache_size": 128, 00:14:42.022 "large_cache_size": 16, 00:14:42.022 "task_count": 2048, 00:14:42.022 "sequence_count": 2048, 00:14:42.022 "buf_count": 2048 00:14:42.022 } 00:14:42.022 } 00:14:42.022 ] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "bdev", 00:14:42.022 "config": [ 00:14:42.022 { 00:14:42.022 "method": "bdev_set_options", 00:14:42.022 "params": { 00:14:42.022 "bdev_io_pool_size": 65535, 00:14:42.022 "bdev_io_cache_size": 256, 00:14:42.022 "bdev_auto_examine": true, 00:14:42.022 "iobuf_small_cache_size": 128, 00:14:42.022 "iobuf_large_cache_size": 16 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "bdev_raid_set_options", 00:14:42.022 "params": { 00:14:42.022 "process_window_size_kb": 1024 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "bdev_iscsi_set_options", 00:14:42.022 "params": { 00:14:42.022 "timeout_sec": 30 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "bdev_nvme_set_options", 00:14:42.022 "params": { 00:14:42.022 "action_on_timeout": "none", 00:14:42.022 "timeout_us": 0, 00:14:42.022 "timeout_admin_us": 0, 00:14:42.022 "keep_alive_timeout_ms": 10000, 00:14:42.022 "arbitration_burst": 0, 00:14:42.022 "low_priority_weight": 0, 00:14:42.022 "medium_priority_weight": 0, 00:14:42.022 "high_priority_weight": 0, 00:14:42.022 "nvme_adminq_poll_period_us": 10000, 00:14:42.022 "nvme_ioq_poll_period_us": 0, 00:14:42.022 "io_queue_requests": 0, 00:14:42.022 "delay_cmd_submit": true, 00:14:42.022 "transport_retry_count": 4, 00:14:42.022 "bdev_retry_count": 3, 00:14:42.022 "transport_ack_timeout": 0, 00:14:42.022 "ctrlr_loss_timeout_sec": 0, 00:14:42.022 "reconnect_delay_sec": 0, 00:14:42.022 "fast_io_fail_timeout_sec": 0, 00:14:42.022 "disable_auto_failback": false, 00:14:42.022 "generate_uuids": false, 00:14:42.022 "transport_tos": 0, 00:14:42.022 "nvme_error_stat": false, 00:14:42.022 "rdma_srq_size": 0, 00:14:42.022 "io_path_stat": false, 
00:14:42.022 "allow_accel_sequence": false, 00:14:42.022 "rdma_max_cq_size": 0, 00:14:42.022 "rdma_cm_event_timeout_ms": 0, 00:14:42.022 "dhchap_digests": [ 00:14:42.022 "sha256", 00:14:42.022 "sha384", 00:14:42.022 "sha512" 00:14:42.022 ], 00:14:42.022 "dhchap_dhgroups": [ 00:14:42.022 "null", 00:14:42.022 "ffdhe2048", 00:14:42.022 "ffdhe3072", 00:14:42.022 "ffdhe4096", 00:14:42.022 "ffdhe6144", 00:14:42.022 "ffdhe8192" 00:14:42.022 ] 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "bdev_nvme_set_hotplug", 00:14:42.022 "params": { 00:14:42.022 "period_us": 100000, 00:14:42.022 "enable": false 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "bdev_malloc_create", 00:14:42.022 "params": { 00:14:42.022 "name": "malloc0", 00:14:42.022 "num_blocks": 8192, 00:14:42.022 "block_size": 4096, 00:14:42.022 "physical_block_size": 4096, 00:14:42.022 "uuid": "3d446f12-7bdb-49f8-82da-7462020bdc86", 00:14:42.022 "optimal_io_boundary": 0 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "bdev_wait_for_examine" 00:14:42.022 } 00:14:42.022 ] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "nbd", 00:14:42.022 "config": [] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "scheduler", 00:14:42.022 "config": [ 00:14:42.022 { 00:14:42.022 "method": "framework_set_scheduler", 00:14:42.022 "params": { 00:14:42.022 "name": "static" 00:14:42.022 } 00:14:42.022 } 00:14:42.022 ] 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "subsystem": "nvmf", 00:14:42.022 "config": [ 00:14:42.022 { 00:14:42.022 "method": "nvmf_set_config", 00:14:42.022 "params": { 00:14:42.022 "discovery_filter": "match_any", 00:14:42.022 "admin_cmd_passthru": { 00:14:42.022 "identify_ctrlr": false 00:14:42.022 } 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "nvmf_set_max_subsystems", 00:14:42.022 "params": { 00:14:42.022 "max_subsystems": 1024 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "nvmf_set_crdt", 00:14:42.022 "params": { 00:14:42.022 "crdt1": 0, 00:14:42.022 "crdt2": 0, 00:14:42.022 "crdt3": 0 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "method": "nvmf_create_transport", 00:14:42.022 "params": { 00:14:42.022 "trtype": "TCP", 00:14:42.022 "max_queue_depth": 128, 00:14:42.022 "max_io_qpairs_per_ctrlr": 127, 00:14:42.022 "in_capsule_data_size": 4096, 00:14:42.022 "max_io_size": 131072, 00:14:42.022 "io_unit_size": 131072, 00:14:42.022 "max_aq_depth": 128, 00:14:42.022 "num_shared_buffers": 511, 00:14:42.022 "buf_cache_size": 4294967295, 00:14:42.022 "dif_insert_or_strip": false, 00:14:42.022 "zcopy": false, 00:14:42.022 "c2h_success": false, 00:14:42.022 "sock_priority": 0, 00:14:42.022 "abort_timeout_sec": 1, 00:14:42.022 "ack_timeout": 0, 00:14:42.022 "data_wr_pool_size": 0 00:14:42.022 } 00:14:42.022 }, 00:14:42.022 { 00:14:42.023 "method": "nvmf_create_subsystem", 00:14:42.023 "params": { 00:14:42.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.023 "allow_any_host": false, 00:14:42.023 "serial_number": "SPDK00000000000001", 00:14:42.023 "model_number": "SPDK bdev Controller", 00:14:42.023 "max_namespaces": 10, 00:14:42.023 "min_cntlid": 1, 00:14:42.023 "max_cntlid": 65519, 00:14:42.023 "ana_reporting": false 00:14:42.023 } 00:14:42.023 }, 00:14:42.023 { 00:14:42.023 "method": "nvmf_subsystem_add_host", 00:14:42.023 "params": { 00:14:42.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.023 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.023 "psk": "/tmp/tmp.lLXS4YOAHK" 00:14:42.023 } 00:14:42.023 }, 
00:14:42.023 { 00:14:42.023 "method": "nvmf_subsystem_add_ns", 00:14:42.023 "params": { 00:14:42.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.023 "namespace": { 00:14:42.023 "nsid": 1, 00:14:42.023 "bdev_name": "malloc0", 00:14:42.023 "nguid": "3D446F127BDB49F882DA7462020BDC86", 00:14:42.023 "uuid": "3d446f12-7bdb-49f8-82da-7462020bdc86", 00:14:42.023 "no_auto_visible": false 00:14:42.023 } 00:14:42.023 } 00:14:42.023 }, 00:14:42.023 { 00:14:42.023 "method": "nvmf_subsystem_add_listener", 00:14:42.023 "params": { 00:14:42.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.023 "listen_address": { 00:14:42.023 "trtype": "TCP", 00:14:42.023 "adrfam": "IPv4", 00:14:42.023 "traddr": "10.0.0.2", 00:14:42.023 "trsvcid": "4420" 00:14:42.023 }, 00:14:42.023 "secure_channel": true 00:14:42.023 } 00:14:42.023 } 00:14:42.023 ] 00:14:42.023 } 00:14:42.023 ] 00:14:42.023 }' 00:14:42.023 22:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:42.281 22:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:42.281 "subsystems": [ 00:14:42.281 { 00:14:42.281 "subsystem": "keyring", 00:14:42.281 "config": [] 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "subsystem": "iobuf", 00:14:42.281 "config": [ 00:14:42.281 { 00:14:42.281 "method": "iobuf_set_options", 00:14:42.281 "params": { 00:14:42.281 "small_pool_count": 8192, 00:14:42.281 "large_pool_count": 1024, 00:14:42.281 "small_bufsize": 8192, 00:14:42.281 "large_bufsize": 135168 00:14:42.281 } 00:14:42.281 } 00:14:42.281 ] 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "subsystem": "sock", 00:14:42.281 "config": [ 00:14:42.281 { 00:14:42.281 "method": "sock_set_default_impl", 00:14:42.281 "params": { 00:14:42.281 "impl_name": "uring" 00:14:42.281 } 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "method": "sock_impl_set_options", 00:14:42.281 "params": { 00:14:42.281 "impl_name": "ssl", 00:14:42.281 "recv_buf_size": 4096, 00:14:42.281 "send_buf_size": 4096, 00:14:42.281 "enable_recv_pipe": true, 00:14:42.281 "enable_quickack": false, 00:14:42.281 "enable_placement_id": 0, 00:14:42.281 "enable_zerocopy_send_server": true, 00:14:42.281 "enable_zerocopy_send_client": false, 00:14:42.281 "zerocopy_threshold": 0, 00:14:42.281 "tls_version": 0, 00:14:42.281 "enable_ktls": false 00:14:42.281 } 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "method": "sock_impl_set_options", 00:14:42.281 "params": { 00:14:42.281 "impl_name": "posix", 00:14:42.281 "recv_buf_size": 2097152, 00:14:42.281 "send_buf_size": 2097152, 00:14:42.281 "enable_recv_pipe": true, 00:14:42.281 "enable_quickack": false, 00:14:42.281 "enable_placement_id": 0, 00:14:42.281 "enable_zerocopy_send_server": true, 00:14:42.281 "enable_zerocopy_send_client": false, 00:14:42.281 "zerocopy_threshold": 0, 00:14:42.281 "tls_version": 0, 00:14:42.281 "enable_ktls": false 00:14:42.281 } 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "method": "sock_impl_set_options", 00:14:42.281 "params": { 00:14:42.281 "impl_name": "uring", 00:14:42.281 "recv_buf_size": 2097152, 00:14:42.281 "send_buf_size": 2097152, 00:14:42.281 "enable_recv_pipe": true, 00:14:42.281 "enable_quickack": false, 00:14:42.281 "enable_placement_id": 0, 00:14:42.281 "enable_zerocopy_send_server": false, 00:14:42.281 "enable_zerocopy_send_client": false, 00:14:42.281 "zerocopy_threshold": 0, 00:14:42.281 "tls_version": 0, 00:14:42.281 "enable_ktls": false 00:14:42.281 } 00:14:42.281 } 00:14:42.281 ] 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "subsystem": 
"vmd", 00:14:42.281 "config": [] 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "subsystem": "accel", 00:14:42.281 "config": [ 00:14:42.281 { 00:14:42.281 "method": "accel_set_options", 00:14:42.281 "params": { 00:14:42.281 "small_cache_size": 128, 00:14:42.281 "large_cache_size": 16, 00:14:42.281 "task_count": 2048, 00:14:42.281 "sequence_count": 2048, 00:14:42.281 "buf_count": 2048 00:14:42.281 } 00:14:42.281 } 00:14:42.281 ] 00:14:42.281 }, 00:14:42.281 { 00:14:42.281 "subsystem": "bdev", 00:14:42.281 "config": [ 00:14:42.281 { 00:14:42.281 "method": "bdev_set_options", 00:14:42.282 "params": { 00:14:42.282 "bdev_io_pool_size": 65535, 00:14:42.282 "bdev_io_cache_size": 256, 00:14:42.282 "bdev_auto_examine": true, 00:14:42.282 "iobuf_small_cache_size": 128, 00:14:42.282 "iobuf_large_cache_size": 16 00:14:42.282 } 00:14:42.282 }, 00:14:42.282 { 00:14:42.282 "method": "bdev_raid_set_options", 00:14:42.282 "params": { 00:14:42.282 "process_window_size_kb": 1024 00:14:42.282 } 00:14:42.282 }, 00:14:42.282 { 00:14:42.282 "method": "bdev_iscsi_set_options", 00:14:42.282 "params": { 00:14:42.282 "timeout_sec": 30 00:14:42.282 } 00:14:42.282 }, 00:14:42.282 { 00:14:42.282 "method": "bdev_nvme_set_options", 00:14:42.282 "params": { 00:14:42.282 "action_on_timeout": "none", 00:14:42.282 "timeout_us": 0, 00:14:42.282 "timeout_admin_us": 0, 00:14:42.282 "keep_alive_timeout_ms": 10000, 00:14:42.282 "arbitration_burst": 0, 00:14:42.282 "low_priority_weight": 0, 00:14:42.282 "medium_priority_weight": 0, 00:14:42.282 "high_priority_weight": 0, 00:14:42.282 "nvme_adminq_poll_period_us": 10000, 00:14:42.282 "nvme_ioq_poll_period_us": 0, 00:14:42.282 "io_queue_requests": 512, 00:14:42.282 "delay_cmd_submit": true, 00:14:42.282 "transport_retry_count": 4, 00:14:42.282 "bdev_retry_count": 3, 00:14:42.282 "transport_ack_timeout": 0, 00:14:42.282 "ctrlr_loss_timeout_sec": 0, 00:14:42.282 "reconnect_delay_sec": 0, 00:14:42.282 "fast_io_fail_timeout_sec": 0, 00:14:42.282 "disable_auto_failback": false, 00:14:42.282 "generate_uuids": false, 00:14:42.282 "transport_tos": 0, 00:14:42.282 "nvme_error_stat": false, 00:14:42.282 "rdma_srq_size": 0, 00:14:42.282 "io_path_stat": false, 00:14:42.282 "allow_accel_sequence": false, 00:14:42.282 "rdma_max_cq_size": 0, 00:14:42.282 "rdma_cm_event_timeout_ms": 0, 00:14:42.282 "dhchap_digests": [ 00:14:42.282 "sha256", 00:14:42.282 "sha384", 00:14:42.282 "sha512" 00:14:42.282 ], 00:14:42.282 "dhchap_dhgroups": [ 00:14:42.282 "null", 00:14:42.282 "ffdhe2048", 00:14:42.282 "ffdhe3072", 00:14:42.282 "ffdhe4096", 00:14:42.282 "ffdhe6144", 00:14:42.282 "ffdhe8192" 00:14:42.282 ] 00:14:42.282 } 00:14:42.282 }, 00:14:42.282 { 00:14:42.282 "method": "bdev_nvme_attach_controller", 00:14:42.282 "params": { 00:14:42.282 "name": "TLSTEST", 00:14:42.282 "trtype": "TCP", 00:14:42.282 "adrfam": "IPv4", 00:14:42.282 "traddr": "10.0.0.2", 00:14:42.282 "trsvcid": "4420", 00:14:42.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.282 "prchk_reftag": false, 00:14:42.282 "prchk_guard": false, 00:14:42.282 "ctrlr_loss_timeout_sec": 0, 00:14:42.282 "reconnect_delay_sec": 0, 00:14:42.282 "fast_io_fail_timeout_sec": 0, 00:14:42.282 "psk": "/tmp/tmp.lLXS4YOAHK", 00:14:42.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.282 "hdgst": false, 00:14:42.282 "ddgst": false 00:14:42.282 } 00:14:42.282 }, 00:14:42.282 { 00:14:42.282 "method": "bdev_nvme_set_hotplug", 00:14:42.282 "params": { 00:14:42.282 "period_us": 100000, 00:14:42.282 "enable": false 00:14:42.282 } 00:14:42.282 }, 00:14:42.282 { 
00:14:42.282 "method": "bdev_wait_for_examine" 00:14:42.282 } 00:14:42.282 ] 00:14:42.282 }, 00:14:42.282 { 00:14:42.282 "subsystem": "nbd", 00:14:42.282 "config": [] 00:14:42.282 } 00:14:42.282 ] 00:14:42.282 }' 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73873 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73873 ']' 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73873 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73873 00:14:42.282 killing process with pid 73873 00:14:42.282 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.282 00:14:42.282 Latency(us) 00:14:42.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.282 =================================================================================================================== 00:14:42.282 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73873' 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73873 00:14:42.282 [2024-07-15 22:40:59.967533] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:42.282 22:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73873 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73824 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73824 ']' 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73824 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73824 00:14:42.539 killing process with pid 73824 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73824' 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73824 00:14:42.539 [2024-07-15 22:41:00.204975] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:42.539 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73824 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:42.798 "subsystems": [ 00:14:42.798 { 00:14:42.798 "subsystem": "keyring", 00:14:42.798 "config": [] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "iobuf", 00:14:42.798 "config": 
[ 00:14:42.798 { 00:14:42.798 "method": "iobuf_set_options", 00:14:42.798 "params": { 00:14:42.798 "small_pool_count": 8192, 00:14:42.798 "large_pool_count": 1024, 00:14:42.798 "small_bufsize": 8192, 00:14:42.798 "large_bufsize": 135168 00:14:42.798 } 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "sock", 00:14:42.798 "config": [ 00:14:42.798 { 00:14:42.798 "method": "sock_set_default_impl", 00:14:42.798 "params": { 00:14:42.798 "impl_name": "uring" 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "sock_impl_set_options", 00:14:42.798 "params": { 00:14:42.798 "impl_name": "ssl", 00:14:42.798 "recv_buf_size": 4096, 00:14:42.798 "send_buf_size": 4096, 00:14:42.798 "enable_recv_pipe": true, 00:14:42.798 "enable_quickack": false, 00:14:42.798 "enable_placement_id": 0, 00:14:42.798 "enable_zerocopy_send_server": true, 00:14:42.798 "enable_zerocopy_send_client": false, 00:14:42.798 "zerocopy_threshold": 0, 00:14:42.798 "tls_version": 0, 00:14:42.798 "enable_ktls": false 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "sock_impl_set_options", 00:14:42.798 "params": { 00:14:42.798 "impl_name": "posix", 00:14:42.798 "recv_buf_size": 2097152, 00:14:42.798 "send_buf_size": 2097152, 00:14:42.798 "enable_recv_pipe": true, 00:14:42.798 "enable_quickack": false, 00:14:42.798 "enable_placement_id": 0, 00:14:42.798 "enable_zerocopy_send_server": true, 00:14:42.798 "enable_zerocopy_send_client": false, 00:14:42.798 "zerocopy_threshold": 0, 00:14:42.798 "tls_version": 0, 00:14:42.798 "enable_ktls": false 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "sock_impl_set_options", 00:14:42.798 "params": { 00:14:42.798 "impl_name": "uring", 00:14:42.798 "recv_buf_size": 2097152, 00:14:42.798 "send_buf_size": 2097152, 00:14:42.798 "enable_recv_pipe": true, 00:14:42.798 "enable_quickack": false, 00:14:42.798 "enable_placement_id": 0, 00:14:42.798 "enable_zerocopy_send_server": false, 00:14:42.798 "enable_zerocopy_send_client": false, 00:14:42.798 "zerocopy_threshold": 0, 00:14:42.798 "tls_version": 0, 00:14:42.798 "enable_ktls": false 00:14:42.798 } 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "vmd", 00:14:42.798 "config": [] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "accel", 00:14:42.798 "config": [ 00:14:42.798 { 00:14:42.798 "method": "accel_set_options", 00:14:42.798 "params": { 00:14:42.798 "small_cache_size": 128, 00:14:42.798 "large_cache_size": 16, 00:14:42.798 "task_count": 2048, 00:14:42.798 "sequence_count": 2048, 00:14:42.798 "buf_count": 2048 00:14:42.798 } 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "bdev", 00:14:42.798 "config": [ 00:14:42.798 { 00:14:42.798 "method": "bdev_set_options", 00:14:42.798 "params": { 00:14:42.798 "bdev_io_pool_size": 65535, 00:14:42.798 "bdev_io_cache_size": 256, 00:14:42.798 "bdev_auto_examine": true, 00:14:42.798 "iobuf_small_cache_size": 128, 00:14:42.798 "iobuf_large_cache_size": 16 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "bdev_raid_set_options", 00:14:42.798 "params": { 00:14:42.798 "process_window_size_kb": 1024 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "bdev_iscsi_set_options", 00:14:42.798 "params": { 00:14:42.798 "timeout_sec": 30 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "bdev_nvme_set_options", 00:14:42.798 "params": { 00:14:42.798 "action_on_timeout": "none", 00:14:42.798 
"timeout_us": 0, 00:14:42.798 "timeout_admin_us": 0, 00:14:42.798 "keep_alive_timeout_ms": 10000, 00:14:42.798 "arbitration_burst": 0, 00:14:42.798 "low_priority_weight": 0, 00:14:42.798 "medium_priority_weight": 0, 00:14:42.798 "high_priority_weight": 0, 00:14:42.798 "nvme_adminq_poll_period_us": 10000, 00:14:42.798 "nvme_ioq_poll_period_us": 0, 00:14:42.798 "io_queue_requests": 0, 00:14:42.798 "delay_cmd_submit": true, 00:14:42.798 "transport_retry_count": 4, 00:14:42.798 "bdev_retry_count": 3, 00:14:42.798 "transport_ack_timeout": 0, 00:14:42.798 "ctrlr_loss_timeout_sec": 0, 00:14:42.798 "reconnect_delay_sec": 0, 00:14:42.798 "fast_io_fail_timeout_sec": 0, 00:14:42.798 "disable_auto_failback": false, 00:14:42.798 "generate_uuids": false, 00:14:42.798 "transport_tos": 0, 00:14:42.798 "nvme_error_stat": false, 00:14:42.798 "rdma_srq_size": 0, 00:14:42.798 "io_path_stat": false, 00:14:42.798 "allow_accel_sequence": false, 00:14:42.798 "rdma_max_cq_size": 0, 00:14:42.798 "rdma_cm_event_timeout_ms": 0, 00:14:42.798 "dhchap_digests": [ 00:14:42.798 "sha256", 00:14:42.798 "sha384", 00:14:42.798 "sha512" 00:14:42.798 ], 00:14:42.798 "dhchap_dhgroups": [ 00:14:42.798 "null", 00:14:42.798 "ffdhe2048", 00:14:42.798 "ffdhe3072", 00:14:42.798 "ffdhe4096", 00:14:42.798 "ffdhe6144", 00:14:42.798 "ffdhe8192" 00:14:42.798 ] 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "bdev_nvme_set_hotplug", 00:14:42.798 "params": { 00:14:42.798 "period_us": 100000, 00:14:42.798 "enable": false 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "bdev_malloc_create", 00:14:42.798 "params": { 00:14:42.798 "name": "malloc0", 00:14:42.798 "num_blocks": 8192, 00:14:42.798 "block_size": 4096, 00:14:42.798 "physical_block_size": 4096, 00:14:42.798 "uuid": "3d446f12-7bdb-49f8-82da-7462020bdc86", 00:14:42.798 "optimal_io_boundary": 0 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "bdev_wait_for_examine" 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "nbd", 00:14:42.798 "config": [] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "scheduler", 00:14:42.798 "config": [ 00:14:42.798 { 00:14:42.798 "method": "framework_set_scheduler", 00:14:42.798 "params": { 00:14:42.798 "name": "static" 00:14:42.798 } 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "subsystem": "nvmf", 00:14:42.798 "config": [ 00:14:42.798 { 00:14:42.798 "method": "nvmf_set_config", 00:14:42.798 "params": { 00:14:42.798 "discovery_filter": "match_any", 00:14:42.798 "admin_cmd_passthru": { 00:14:42.798 "identify_ctrlr": false 00:14:42.798 } 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_set_max_subsystems", 00:14:42.798 "params": { 00:14:42.798 "max_subsystems": 1024 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_set_crdt", 00:14:42.798 "params": { 00:14:42.798 "crdt1": 0, 00:14:42.798 "crdt2": 0, 00:14:42.798 "crdt3": 0 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_create_transport", 00:14:42.798 "params": { 00:14:42.798 "trtype": "TCP", 00:14:42.798 "max_queue_depth": 128, 00:14:42.798 "max_io_qpairs_per_ctrlr": 127, 00:14:42.798 "in_capsule_data_size": 4096, 00:14:42.798 "max_io_size": 131072, 00:14:42.798 "io_unit_size": 131072, 00:14:42.798 "max_aq_depth": 128, 00:14:42.798 "num_shared_buffers": 511, 00:14:42.798 "buf_cache_size": 4294967295, 00:14:42.798 "dif_insert_or_strip": false, 00:14:42.798 "zcopy": false, 00:14:42.798 "c2h_success": 
false, 00:14:42.798 "sock_priority": 0, 00:14:42.798 "abort_timeout_sec": 1, 00:14:42.798 "ack_timeout": 0, 00:14:42.798 "data_wr_pool_size": 0 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_create_subsystem", 00:14:42.798 "params": { 00:14:42.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.798 "allow_any_host": false, 00:14:42.798 "serial_number": "SPDK00000000000001", 00:14:42.798 "model_number": "SPDK bdev Controller", 00:14:42.798 "max_namespaces": 10, 00:14:42.798 "min_cntlid": 1, 00:14:42.798 "max_cntlid": 65519, 00:14:42.798 "ana_reporting": false 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_subsystem_add_host", 00:14:42.798 "params": { 00:14:42.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.798 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.798 "psk": "/tmp/tmp.lLXS4YOAHK" 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_subsystem_add_ns", 00:14:42.798 "params": { 00:14:42.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.798 "namespace": { 00:14:42.798 "nsid": 1, 00:14:42.798 "bdev_name": "malloc0", 00:14:42.798 "nguid": "3D446F127BDB49F882DA7462020BDC86", 00:14:42.798 "uuid": "3d446f12-7bdb-49f8-82da-7462020bdc86", 00:14:42.798 "no_auto_visible": false 00:14:42.798 } 00:14:42.798 } 00:14:42.798 }, 00:14:42.798 { 00:14:42.798 "method": "nvmf_subsystem_add_listener", 00:14:42.798 "params": { 00:14:42.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.798 "listen_address": { 00:14:42.798 "trtype": "TCP", 00:14:42.798 "adrfam": "IPv4", 00:14:42.798 "traddr": "10.0.0.2", 00:14:42.798 "trsvcid": "4420" 00:14:42.798 }, 00:14:42.798 "secure_channel": true 00:14:42.798 } 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 } 00:14:42.798 ] 00:14:42.798 }' 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73922 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73922 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73922 ']' 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.798 22:41:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.798 [2024-07-15 22:41:00.508006] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
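
The target started at target/tls.sh@203 above receives its whole configuration as JSON on the command line rather than from a file on disk: nvmfappstart echoes the config shown above and hands it to nvmf_tgt as -c /dev/fd/62. A minimal sketch of that pattern follows; the placeholder JSON and the use of process substitution are assumptions about how the harness wires the fd, not the helper's actual code, while the flag meanings are taken from the notices in this trace:

    # -i 0      shared-memory instance id (matches --file-prefix=spdk0 in the EAL arguments)
    # -e 0xFFFF tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified" in the app notices)
    # -m 0x2    core mask: a single reactor, which the notice that follows shows starting on core 1
    # -c ...    JSON config passed on an inherited fd; the fd number (/dev/fd/62 here) is picked
    #           by the shell, not by SPDK
    tgtconf='{ "subsystems": [ ... ] }'   # placeholder for the JSON echoed at target/tls.sh@203
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
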
00:14:42.798 [2024-07-15 22:41:00.508314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.056 [2024-07-15 22:41:00.647817] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.056 [2024-07-15 22:41:00.746456] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.056 [2024-07-15 22:41:00.746515] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.056 [2024-07-15 22:41:00.746527] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.056 [2024-07-15 22:41:00.746536] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.056 [2024-07-15 22:41:00.746544] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.056 [2024-07-15 22:41:00.746635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.315 [2024-07-15 22:41:00.913321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:43.315 [2024-07-15 22:41:00.981252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.315 [2024-07-15 22:41:00.997151] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:43.315 [2024-07-15 22:41:01.013164] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:43.315 [2024-07-15 22:41:01.013368] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73954 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73954 /var/tmp/bdevperf.sock 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73954 ']' 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
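
The initiator side launched next is a bdevperf instance in RPC-wait mode: nothing runs until the harness sends it a perform_tests RPC. A condensed, hedged recap of the two commands as they appear in the surrounding trace; the flag annotations are standard bdevperf options, and the process-substitution form of -c is an assumption about how /dev/fd/63 is fed, mirroring the target launch above:

    # -z            start idle and wait for a perform_tests RPC instead of running immediately
    # -r <sock>     serve JSON-RPC on /var/tmp/bdevperf.sock (rpc.py and save_config use the same socket)
    # -m 0x4        core mask: one reactor, on core 2 per the notice below
    # -q 128 -o 4096 -w verify -t 10   queue depth, I/O size in bytes, verify workload, 10 s run time
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    # the -t 20 here bounds how long the script waits on the RPC; the workload duration is the -t 10 above
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
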
00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:43.881 22:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:43.881 "subsystems": [ 00:14:43.881 { 00:14:43.881 "subsystem": "keyring", 00:14:43.881 "config": [] 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "subsystem": "iobuf", 00:14:43.881 "config": [ 00:14:43.881 { 00:14:43.881 "method": "iobuf_set_options", 00:14:43.881 "params": { 00:14:43.881 "small_pool_count": 8192, 00:14:43.881 "large_pool_count": 1024, 00:14:43.881 "small_bufsize": 8192, 00:14:43.881 "large_bufsize": 135168 00:14:43.881 } 00:14:43.881 } 00:14:43.881 ] 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "subsystem": "sock", 00:14:43.881 "config": [ 00:14:43.881 { 00:14:43.881 "method": "sock_set_default_impl", 00:14:43.881 "params": { 00:14:43.881 "impl_name": "uring" 00:14:43.881 } 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "method": "sock_impl_set_options", 00:14:43.881 "params": { 00:14:43.881 "impl_name": "ssl", 00:14:43.881 "recv_buf_size": 4096, 00:14:43.881 "send_buf_size": 4096, 00:14:43.881 "enable_recv_pipe": true, 00:14:43.881 "enable_quickack": false, 00:14:43.881 "enable_placement_id": 0, 00:14:43.881 "enable_zerocopy_send_server": true, 00:14:43.881 "enable_zerocopy_send_client": false, 00:14:43.881 "zerocopy_threshold": 0, 00:14:43.881 "tls_version": 0, 00:14:43.881 "enable_ktls": false 00:14:43.881 } 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "method": "sock_impl_set_options", 00:14:43.881 "params": { 00:14:43.881 "impl_name": "posix", 00:14:43.881 "recv_buf_size": 2097152, 00:14:43.881 "send_buf_size": 2097152, 00:14:43.881 "enable_recv_pipe": true, 00:14:43.881 "enable_quickack": false, 00:14:43.881 "enable_placement_id": 0, 00:14:43.881 "enable_zerocopy_send_server": true, 00:14:43.881 "enable_zerocopy_send_client": false, 00:14:43.881 "zerocopy_threshold": 0, 00:14:43.881 "tls_version": 0, 00:14:43.881 "enable_ktls": false 00:14:43.881 } 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "method": "sock_impl_set_options", 00:14:43.881 "params": { 00:14:43.881 "impl_name": "uring", 00:14:43.881 "recv_buf_size": 2097152, 00:14:43.881 "send_buf_size": 2097152, 00:14:43.881 "enable_recv_pipe": true, 00:14:43.881 "enable_quickack": false, 00:14:43.881 "enable_placement_id": 0, 00:14:43.881 "enable_zerocopy_send_server": false, 00:14:43.881 "enable_zerocopy_send_client": false, 00:14:43.881 "zerocopy_threshold": 0, 00:14:43.881 "tls_version": 0, 00:14:43.881 "enable_ktls": false 00:14:43.881 } 00:14:43.881 } 00:14:43.881 ] 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "subsystem": "vmd", 00:14:43.881 "config": [] 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "subsystem": "accel", 00:14:43.881 "config": [ 00:14:43.881 { 00:14:43.881 "method": "accel_set_options", 00:14:43.881 "params": { 00:14:43.881 "small_cache_size": 128, 00:14:43.881 "large_cache_size": 16, 00:14:43.881 "task_count": 2048, 00:14:43.881 "sequence_count": 2048, 00:14:43.881 "buf_count": 2048 00:14:43.881 } 00:14:43.881 } 00:14:43.881 ] 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "subsystem": "bdev", 00:14:43.881 "config": [ 00:14:43.881 { 00:14:43.881 "method": "bdev_set_options", 00:14:43.881 "params": { 00:14:43.881 "bdev_io_pool_size": 65535, 00:14:43.881 
"bdev_io_cache_size": 256, 00:14:43.881 "bdev_auto_examine": true, 00:14:43.881 "iobuf_small_cache_size": 128, 00:14:43.881 "iobuf_large_cache_size": 16 00:14:43.881 } 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "method": "bdev_raid_set_options", 00:14:43.881 "params": { 00:14:43.881 "process_window_size_kb": 1024 00:14:43.881 } 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "method": "bdev_iscsi_set_options", 00:14:43.881 "params": { 00:14:43.881 "timeout_sec": 30 00:14:43.881 } 00:14:43.881 }, 00:14:43.881 { 00:14:43.881 "method": "bdev_nvme_set_options", 00:14:43.881 "params": { 00:14:43.881 "action_on_timeout": "none", 00:14:43.881 "timeout_us": 0, 00:14:43.881 "timeout_admin_us": 0, 00:14:43.882 "keep_alive_timeout_ms": 10000, 00:14:43.882 "arbitration_burst": 0, 00:14:43.882 "low_priority_weight": 0, 00:14:43.882 "medium_priority_weight": 0, 00:14:43.882 "high_priority_weight": 0, 00:14:43.882 "nvme_adminq_poll_period_us": 10000, 00:14:43.882 "nvme_ioq_poll_period_us": 0, 00:14:43.882 "io_queue_requests": 512, 00:14:43.882 "delay_cmd_submit": true, 00:14:43.882 "transport_retry_count": 4, 00:14:43.882 "bdev_retry_count": 3, 00:14:43.882 "transport_ack_timeout": 0, 00:14:43.882 "ctrlr_loss_timeout_sec": 0, 00:14:43.882 "reconnect_delay_sec": 0, 00:14:43.882 "fast_io_fail_timeout_sec": 0, 00:14:43.882 "disable_auto_failback": false, 00:14:43.882 "generate_uuids": false, 00:14:43.882 "transport_tos": 0, 00:14:43.882 "nvme_error_stat": false, 00:14:43.882 "rdma_srq_size": 0, 00:14:43.882 "io_path_stat": false, 00:14:43.882 "allow_accel_sequence": false, 00:14:43.882 "rdma_max_cq_size": 0, 00:14:43.882 "rdma_cm_event_timeout_ms": 0, 00:14:43.882 "dhchap_digests": [ 00:14:43.882 "sha256", 00:14:43.882 "sha384", 00:14:43.882 "sha512" 00:14:43.882 ], 00:14:43.882 "dhchap_dhgroups": [ 00:14:43.882 "null", 00:14:43.882 "ffdhe2048", 00:14:43.882 "ffdhe3072", 00:14:43.882 "ffdhe4096", 00:14:43.882 "ffdhe6144", 00:14:43.882 "ffdhe8192" 00:14:43.882 ] 00:14:43.882 } 00:14:43.882 }, 00:14:43.882 { 00:14:43.882 "method": "bdev_nvme_attach_controller", 00:14:43.882 "params": { 00:14:43.882 "name": "TLSTEST", 00:14:43.882 "trtype": "TCP", 00:14:43.882 "adrfam": "IPv4", 00:14:43.882 "traddr": "10.0.0.2", 00:14:43.882 "trsvcid": "4420", 00:14:43.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.882 "prchk_reftag": false, 00:14:43.882 "prchk_guard": false, 00:14:43.882 "ctrlr_loss_timeout_sec": 0, 00:14:43.882 "reconnect_delay_sec": 0, 00:14:43.882 "fast_io_fail_timeout_sec": 0, 00:14:43.882 "psk": "/tmp/tmp.lLXS4YOAHK", 00:14:43.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.882 "hdgst": false, 00:14:43.882 "ddgst": false 00:14:43.882 } 00:14:43.882 }, 00:14:43.882 { 00:14:43.882 "method": "bdev_nvme_set_hotplug", 00:14:43.882 "params": { 00:14:43.882 "period_us": 100000, 00:14:43.882 "enable": false 00:14:43.882 } 00:14:43.882 }, 00:14:43.882 { 00:14:43.882 "method": "bdev_wait_for_examine" 00:14:43.882 } 00:14:43.882 ] 00:14:43.882 }, 00:14:43.882 { 00:14:43.882 "subsystem": "nbd", 00:14:43.882 "config": [] 00:14:43.882 } 00:14:43.882 ] 00:14:43.882 }' 00:14:43.882 [2024-07-15 22:41:01.552130] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:43.882 [2024-07-15 22:41:01.552500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73954 ] 00:14:43.882 [2024-07-15 22:41:01.695654] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.140 [2024-07-15 22:41:01.807016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.140 [2024-07-15 22:41:01.942533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:44.398 [2024-07-15 22:41:01.979617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.398 [2024-07-15 22:41:01.980095] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:44.963 22:41:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.963 22:41:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:44.963 22:41:02 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:44.963 Running I/O for 10 seconds... 00:14:54.942 00:14:54.942 Latency(us) 00:14:54.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.942 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:54.942 Verification LBA range: start 0x0 length 0x2000 00:14:54.942 TLSTESTn1 : 10.02 3957.14 15.46 0.00 0.00 32282.31 7804.74 34317.03 00:14:54.942 =================================================================================================================== 00:14:54.942 Total : 3957.14 15.46 0.00 0.00 32282.31 7804.74 34317.03 00:14:54.942 0 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73954 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73954 ']' 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73954 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73954 00:14:54.942 killing process with pid 73954 00:14:54.942 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.942 00:14:54.942 Latency(us) 00:14:54.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.942 =================================================================================================================== 00:14:54.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73954' 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73954 00:14:54.942 [2024-07-15 22:41:12.732212] app.c:1029:log_deprecation_hits: *WARNING*: 
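
The table that follows is bdevperf's standard result block: per-job runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds. As a quick cross-check of the run below, 3957.14 IOPS x 4096-byte I/O = 3957.14 / 256 MiB/s, roughly 15.46 MiB/s, which matches the MiB/s column. The all-zero table printed when each process receives its shutdown signal is just the end-of-run summary for a worker with nothing outstanding; the 18446744073709551616.00 average seen there is 2^64 and appears to be an unpopulated counter rather than a measured latency.
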
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:54.942 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73954 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73922 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73922 ']' 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73922 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73922 00:14:55.202 killing process with pid 73922 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73922' 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73922 00:14:55.202 [2024-07-15 22:41:12.985101] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:55.202 22:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73922 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74087 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74087 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74087 ']' 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.461 22:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.720 [2024-07-15 22:41:13.315912] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:55.720 [2024-07-15 22:41:13.316334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.720 [2024-07-15 22:41:13.468296] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.977 [2024-07-15 22:41:13.627029] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:55.977 [2024-07-15 22:41:13.627101] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.977 [2024-07-15 22:41:13.627125] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.977 [2024-07-15 22:41:13.627136] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.977 [2024-07-15 22:41:13.627146] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.978 [2024-07-15 22:41:13.627178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.978 [2024-07-15 22:41:13.702317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lLXS4YOAHK 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lLXS4YOAHK 00:14:56.542 22:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:56.800 [2024-07-15 22:41:14.564913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.800 22:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.057 22:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:57.622 [2024-07-15 22:41:15.185073] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:57.622 [2024-07-15 22:41:15.185344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.623 22:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:57.880 malloc0 00:14:57.880 22:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:58.156 22:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK 00:14:58.414 [2024-07-15 22:41:16.127572] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74147 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:58.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
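
For this pass the target is configured step by step instead of from a pre-built JSON blob. The setup_nvmf_tgt calls traced above boil down to the rpc.py sequence below; the commands are verbatim from the trace, while the comments are inferences hedged against the notices and the saved configs elsewhere in this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o               # -o: presumably the c2h-success toggle, matching "c2h_success": false in the saved configs
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10   # serial number and max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: the TLS/secure-channel listener flag, matching the "TLS support is considered experimental" notice
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0         # 32 MB RAM bdev with 4096-byte blocks (8192 blocks)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lLXS4YOAHK   # still the deprecated PSK-path form, which triggers the tcp.c:3693 warning
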
00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74147 /var/tmp/bdevperf.sock 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74147 ']' 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.414 22:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.414 [2024-07-15 22:41:16.210402] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:58.414 [2024-07-15 22:41:16.211044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74147 ] 00:14:58.672 [2024-07-15 22:41:16.354650] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.672 [2024-07-15 22:41:16.483599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.929 [2024-07-15 22:41:16.540494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:59.496 22:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.496 22:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:59.496 22:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lLXS4YOAHK 00:14:59.756 22:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:00.014 [2024-07-15 22:41:17.698105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.014 nvme0n1 00:15:00.015 22:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.273 Running I/O for 1 seconds... 
00:15:01.209 00:15:01.209 Latency(us) 00:15:01.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.209 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.209 Verification LBA range: start 0x0 length 0x2000 00:15:01.209 nvme0n1 : 1.03 3610.96 14.11 0.00 0.00 35031.06 9592.09 22997.18 00:15:01.209 =================================================================================================================== 00:15:01.209 Total : 3610.96 14.11 0.00 0.00 35031.06 9592.09 22997.18 00:15:01.209 0 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74147 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74147 ']' 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74147 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74147 00:15:01.209 killing process with pid 74147 00:15:01.209 Received shutdown signal, test time was about 1.000000 seconds 00:15:01.209 00:15:01.209 Latency(us) 00:15:01.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.209 =================================================================================================================== 00:15:01.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74147' 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74147 00:15:01.209 22:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74147 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74087 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74087 ']' 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74087 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74087 00:15:01.468 killing process with pid 74087 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74087' 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74087 00:15:01.468 [2024-07-15 22:41:19.248054] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:01.468 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74087 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74198 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74198 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74198 ']' 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.035 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.035 [2024-07-15 22:41:19.641410] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:02.035 [2024-07-15 22:41:19.641893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.035 [2024-07-15 22:41:19.785010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.294 [2024-07-15 22:41:19.908594] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.294 [2024-07-15 22:41:19.908937] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.294 [2024-07-15 22:41:19.909115] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.294 [2024-07-15 22:41:19.909284] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.294 [2024-07-15 22:41:19.909304] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:02.294 [2024-07-15 22:41:19.909338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.294 [2024-07-15 22:41:19.970161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.859 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.859 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.859 22:41:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.859 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.859 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.118 [2024-07-15 22:41:20.722732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.118 malloc0 00:15:03.118 [2024-07-15 22:41:20.757562] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.118 [2024-07-15 22:41:20.757833] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=74231 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 74231 /var/tmp/bdevperf.sock 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74231 ']' 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.118 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.118 [2024-07-15 22:41:20.833451] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
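
This final pass exercises the replacement for the deprecated PSK-path style that produced the earlier warnings: the commands that follow in the trace register the PSK file with the bdevperf app's keyring under the name key0 and then attach by key name. Consistent with that, the save_config dumps at the end of the run show "psk": "key0" on both sides and a listener with "secure_channel": false plus "sock_impl": "ssl", instead of a raw path and "secure_channel": true as in the first target's config. A recap of the two client-side calls, verbatim from the trace below:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lLXS4YOAHK
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
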
00:15:03.118 [2024-07-15 22:41:20.833762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74231 ] 00:15:03.376 [2024-07-15 22:41:20.967439] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.376 [2024-07-15 22:41:21.083518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.376 [2024-07-15 22:41:21.138848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:04.311 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.311 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:04.311 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lLXS4YOAHK 00:15:04.311 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:04.570 [2024-07-15 22:41:22.359473] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.829 nvme0n1 00:15:04.829 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.829 Running I/O for 1 seconds... 00:15:05.794 00:15:05.794 Latency(us) 00:15:05.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.794 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:05.794 Verification LBA range: start 0x0 length 0x2000 00:15:05.794 nvme0n1 : 1.03 3710.38 14.49 0.00 0.00 34069.58 7626.01 21209.83 00:15:05.794 =================================================================================================================== 00:15:05.794 Total : 3710.38 14.49 0.00 0.00 34069.58 7626.01 21209.83 00:15:05.794 0 00:15:06.052 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:06.052 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.052 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.052 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.052 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:06.052 "subsystems": [ 00:15:06.052 { 00:15:06.052 "subsystem": "keyring", 00:15:06.052 "config": [ 00:15:06.052 { 00:15:06.052 "method": "keyring_file_add_key", 00:15:06.052 "params": { 00:15:06.052 "name": "key0", 00:15:06.052 "path": "/tmp/tmp.lLXS4YOAHK" 00:15:06.052 } 00:15:06.052 } 00:15:06.052 ] 00:15:06.052 }, 00:15:06.052 { 00:15:06.052 "subsystem": "iobuf", 00:15:06.052 "config": [ 00:15:06.052 { 00:15:06.052 "method": "iobuf_set_options", 00:15:06.052 "params": { 00:15:06.052 "small_pool_count": 8192, 00:15:06.052 "large_pool_count": 1024, 00:15:06.052 "small_bufsize": 8192, 00:15:06.052 "large_bufsize": 135168 00:15:06.052 } 00:15:06.052 } 00:15:06.052 ] 00:15:06.052 }, 00:15:06.052 { 00:15:06.052 "subsystem": "sock", 00:15:06.052 "config": [ 00:15:06.052 { 00:15:06.052 "method": "sock_set_default_impl", 00:15:06.052 "params": { 00:15:06.052 "impl_name": "uring" 
00:15:06.052 } 00:15:06.052 }, 00:15:06.052 { 00:15:06.052 "method": "sock_impl_set_options", 00:15:06.052 "params": { 00:15:06.052 "impl_name": "ssl", 00:15:06.052 "recv_buf_size": 4096, 00:15:06.052 "send_buf_size": 4096, 00:15:06.052 "enable_recv_pipe": true, 00:15:06.052 "enable_quickack": false, 00:15:06.052 "enable_placement_id": 0, 00:15:06.052 "enable_zerocopy_send_server": true, 00:15:06.052 "enable_zerocopy_send_client": false, 00:15:06.052 "zerocopy_threshold": 0, 00:15:06.052 "tls_version": 0, 00:15:06.052 "enable_ktls": false 00:15:06.052 } 00:15:06.052 }, 00:15:06.052 { 00:15:06.052 "method": "sock_impl_set_options", 00:15:06.052 "params": { 00:15:06.052 "impl_name": "posix", 00:15:06.053 "recv_buf_size": 2097152, 00:15:06.053 "send_buf_size": 2097152, 00:15:06.053 "enable_recv_pipe": true, 00:15:06.053 "enable_quickack": false, 00:15:06.053 "enable_placement_id": 0, 00:15:06.053 "enable_zerocopy_send_server": true, 00:15:06.053 "enable_zerocopy_send_client": false, 00:15:06.053 "zerocopy_threshold": 0, 00:15:06.053 "tls_version": 0, 00:15:06.053 "enable_ktls": false 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "sock_impl_set_options", 00:15:06.053 "params": { 00:15:06.053 "impl_name": "uring", 00:15:06.053 "recv_buf_size": 2097152, 00:15:06.053 "send_buf_size": 2097152, 00:15:06.053 "enable_recv_pipe": true, 00:15:06.053 "enable_quickack": false, 00:15:06.053 "enable_placement_id": 0, 00:15:06.053 "enable_zerocopy_send_server": false, 00:15:06.053 "enable_zerocopy_send_client": false, 00:15:06.053 "zerocopy_threshold": 0, 00:15:06.053 "tls_version": 0, 00:15:06.053 "enable_ktls": false 00:15:06.053 } 00:15:06.053 } 00:15:06.053 ] 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "subsystem": "vmd", 00:15:06.053 "config": [] 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "subsystem": "accel", 00:15:06.053 "config": [ 00:15:06.053 { 00:15:06.053 "method": "accel_set_options", 00:15:06.053 "params": { 00:15:06.053 "small_cache_size": 128, 00:15:06.053 "large_cache_size": 16, 00:15:06.053 "task_count": 2048, 00:15:06.053 "sequence_count": 2048, 00:15:06.053 "buf_count": 2048 00:15:06.053 } 00:15:06.053 } 00:15:06.053 ] 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "subsystem": "bdev", 00:15:06.053 "config": [ 00:15:06.053 { 00:15:06.053 "method": "bdev_set_options", 00:15:06.053 "params": { 00:15:06.053 "bdev_io_pool_size": 65535, 00:15:06.053 "bdev_io_cache_size": 256, 00:15:06.053 "bdev_auto_examine": true, 00:15:06.053 "iobuf_small_cache_size": 128, 00:15:06.053 "iobuf_large_cache_size": 16 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "bdev_raid_set_options", 00:15:06.053 "params": { 00:15:06.053 "process_window_size_kb": 1024 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "bdev_iscsi_set_options", 00:15:06.053 "params": { 00:15:06.053 "timeout_sec": 30 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "bdev_nvme_set_options", 00:15:06.053 "params": { 00:15:06.053 "action_on_timeout": "none", 00:15:06.053 "timeout_us": 0, 00:15:06.053 "timeout_admin_us": 0, 00:15:06.053 "keep_alive_timeout_ms": 10000, 00:15:06.053 "arbitration_burst": 0, 00:15:06.053 "low_priority_weight": 0, 00:15:06.053 "medium_priority_weight": 0, 00:15:06.053 "high_priority_weight": 0, 00:15:06.053 "nvme_adminq_poll_period_us": 10000, 00:15:06.053 "nvme_ioq_poll_period_us": 0, 00:15:06.053 "io_queue_requests": 0, 00:15:06.053 "delay_cmd_submit": true, 00:15:06.053 "transport_retry_count": 4, 00:15:06.053 "bdev_retry_count": 3, 
00:15:06.053 "transport_ack_timeout": 0, 00:15:06.053 "ctrlr_loss_timeout_sec": 0, 00:15:06.053 "reconnect_delay_sec": 0, 00:15:06.053 "fast_io_fail_timeout_sec": 0, 00:15:06.053 "disable_auto_failback": false, 00:15:06.053 "generate_uuids": false, 00:15:06.053 "transport_tos": 0, 00:15:06.053 "nvme_error_stat": false, 00:15:06.053 "rdma_srq_size": 0, 00:15:06.053 "io_path_stat": false, 00:15:06.053 "allow_accel_sequence": false, 00:15:06.053 "rdma_max_cq_size": 0, 00:15:06.053 "rdma_cm_event_timeout_ms": 0, 00:15:06.053 "dhchap_digests": [ 00:15:06.053 "sha256", 00:15:06.053 "sha384", 00:15:06.053 "sha512" 00:15:06.053 ], 00:15:06.053 "dhchap_dhgroups": [ 00:15:06.053 "null", 00:15:06.053 "ffdhe2048", 00:15:06.053 "ffdhe3072", 00:15:06.053 "ffdhe4096", 00:15:06.053 "ffdhe6144", 00:15:06.053 "ffdhe8192" 00:15:06.053 ] 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "bdev_nvme_set_hotplug", 00:15:06.053 "params": { 00:15:06.053 "period_us": 100000, 00:15:06.053 "enable": false 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "bdev_malloc_create", 00:15:06.053 "params": { 00:15:06.053 "name": "malloc0", 00:15:06.053 "num_blocks": 8192, 00:15:06.053 "block_size": 4096, 00:15:06.053 "physical_block_size": 4096, 00:15:06.053 "uuid": "3b2452cb-340a-4dc6-bcb4-5f4e172281e2", 00:15:06.053 "optimal_io_boundary": 0 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "bdev_wait_for_examine" 00:15:06.053 } 00:15:06.053 ] 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "subsystem": "nbd", 00:15:06.053 "config": [] 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "subsystem": "scheduler", 00:15:06.053 "config": [ 00:15:06.053 { 00:15:06.053 "method": "framework_set_scheduler", 00:15:06.053 "params": { 00:15:06.053 "name": "static" 00:15:06.053 } 00:15:06.053 } 00:15:06.053 ] 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "subsystem": "nvmf", 00:15:06.053 "config": [ 00:15:06.053 { 00:15:06.053 "method": "nvmf_set_config", 00:15:06.053 "params": { 00:15:06.053 "discovery_filter": "match_any", 00:15:06.053 "admin_cmd_passthru": { 00:15:06.053 "identify_ctrlr": false 00:15:06.053 } 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "nvmf_set_max_subsystems", 00:15:06.053 "params": { 00:15:06.053 "max_subsystems": 1024 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "nvmf_set_crdt", 00:15:06.053 "params": { 00:15:06.053 "crdt1": 0, 00:15:06.053 "crdt2": 0, 00:15:06.053 "crdt3": 0 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "nvmf_create_transport", 00:15:06.053 "params": { 00:15:06.053 "trtype": "TCP", 00:15:06.053 "max_queue_depth": 128, 00:15:06.053 "max_io_qpairs_per_ctrlr": 127, 00:15:06.053 "in_capsule_data_size": 4096, 00:15:06.053 "max_io_size": 131072, 00:15:06.053 "io_unit_size": 131072, 00:15:06.053 "max_aq_depth": 128, 00:15:06.053 "num_shared_buffers": 511, 00:15:06.053 "buf_cache_size": 4294967295, 00:15:06.053 "dif_insert_or_strip": false, 00:15:06.053 "zcopy": false, 00:15:06.053 "c2h_success": false, 00:15:06.053 "sock_priority": 0, 00:15:06.053 "abort_timeout_sec": 1, 00:15:06.053 "ack_timeout": 0, 00:15:06.053 "data_wr_pool_size": 0 00:15:06.053 } 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "method": "nvmf_create_subsystem", 00:15:06.054 "params": { 00:15:06.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.054 "allow_any_host": false, 00:15:06.054 "serial_number": "00000000000000000000", 00:15:06.054 "model_number": "SPDK bdev Controller", 00:15:06.054 "max_namespaces": 32, 
00:15:06.054 "min_cntlid": 1, 00:15:06.054 "max_cntlid": 65519, 00:15:06.054 "ana_reporting": false 00:15:06.054 } 00:15:06.054 }, 00:15:06.054 { 00:15:06.054 "method": "nvmf_subsystem_add_host", 00:15:06.054 "params": { 00:15:06.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.054 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.054 "psk": "key0" 00:15:06.054 } 00:15:06.054 }, 00:15:06.054 { 00:15:06.054 "method": "nvmf_subsystem_add_ns", 00:15:06.054 "params": { 00:15:06.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.054 "namespace": { 00:15:06.054 "nsid": 1, 00:15:06.054 "bdev_name": "malloc0", 00:15:06.054 "nguid": "3B2452CB340A4DC6BCB45F4E172281E2", 00:15:06.054 "uuid": "3b2452cb-340a-4dc6-bcb4-5f4e172281e2", 00:15:06.054 "no_auto_visible": false 00:15:06.054 } 00:15:06.054 } 00:15:06.054 }, 00:15:06.054 { 00:15:06.054 "method": "nvmf_subsystem_add_listener", 00:15:06.054 "params": { 00:15:06.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.054 "listen_address": { 00:15:06.054 "trtype": "TCP", 00:15:06.054 "adrfam": "IPv4", 00:15:06.054 "traddr": "10.0.0.2", 00:15:06.054 "trsvcid": "4420" 00:15:06.054 }, 00:15:06.054 "secure_channel": false, 00:15:06.054 "sock_impl": "ssl" 00:15:06.054 } 00:15:06.054 } 00:15:06.054 ] 00:15:06.054 } 00:15:06.054 ] 00:15:06.054 }' 00:15:06.054 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:06.312 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:06.312 "subsystems": [ 00:15:06.312 { 00:15:06.312 "subsystem": "keyring", 00:15:06.312 "config": [ 00:15:06.312 { 00:15:06.312 "method": "keyring_file_add_key", 00:15:06.312 "params": { 00:15:06.312 "name": "key0", 00:15:06.312 "path": "/tmp/tmp.lLXS4YOAHK" 00:15:06.312 } 00:15:06.312 } 00:15:06.312 ] 00:15:06.312 }, 00:15:06.312 { 00:15:06.312 "subsystem": "iobuf", 00:15:06.312 "config": [ 00:15:06.312 { 00:15:06.312 "method": "iobuf_set_options", 00:15:06.312 "params": { 00:15:06.312 "small_pool_count": 8192, 00:15:06.312 "large_pool_count": 1024, 00:15:06.312 "small_bufsize": 8192, 00:15:06.312 "large_bufsize": 135168 00:15:06.312 } 00:15:06.312 } 00:15:06.312 ] 00:15:06.312 }, 00:15:06.312 { 00:15:06.312 "subsystem": "sock", 00:15:06.312 "config": [ 00:15:06.312 { 00:15:06.312 "method": "sock_set_default_impl", 00:15:06.312 "params": { 00:15:06.312 "impl_name": "uring" 00:15:06.312 } 00:15:06.312 }, 00:15:06.313 { 00:15:06.313 "method": "sock_impl_set_options", 00:15:06.313 "params": { 00:15:06.313 "impl_name": "ssl", 00:15:06.313 "recv_buf_size": 4096, 00:15:06.313 "send_buf_size": 4096, 00:15:06.313 "enable_recv_pipe": true, 00:15:06.313 "enable_quickack": false, 00:15:06.313 "enable_placement_id": 0, 00:15:06.313 "enable_zerocopy_send_server": true, 00:15:06.313 "enable_zerocopy_send_client": false, 00:15:06.313 "zerocopy_threshold": 0, 00:15:06.313 "tls_version": 0, 00:15:06.313 "enable_ktls": false 00:15:06.313 } 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "method": "sock_impl_set_options", 00:15:06.313 "params": { 00:15:06.313 "impl_name": "posix", 00:15:06.313 "recv_buf_size": 2097152, 00:15:06.313 "send_buf_size": 2097152, 00:15:06.313 "enable_recv_pipe": true, 00:15:06.313 "enable_quickack": false, 00:15:06.313 "enable_placement_id": 0, 00:15:06.313 "enable_zerocopy_send_server": true, 00:15:06.313 "enable_zerocopy_send_client": false, 00:15:06.313 "zerocopy_threshold": 0, 00:15:06.313 "tls_version": 0, 00:15:06.313 "enable_ktls": false 00:15:06.313 } 00:15:06.313 }, 00:15:06.313 { 
00:15:06.313 "method": "sock_impl_set_options", 00:15:06.313 "params": { 00:15:06.313 "impl_name": "uring", 00:15:06.313 "recv_buf_size": 2097152, 00:15:06.313 "send_buf_size": 2097152, 00:15:06.313 "enable_recv_pipe": true, 00:15:06.313 "enable_quickack": false, 00:15:06.313 "enable_placement_id": 0, 00:15:06.313 "enable_zerocopy_send_server": false, 00:15:06.313 "enable_zerocopy_send_client": false, 00:15:06.313 "zerocopy_threshold": 0, 00:15:06.313 "tls_version": 0, 00:15:06.313 "enable_ktls": false 00:15:06.313 } 00:15:06.313 } 00:15:06.313 ] 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "subsystem": "vmd", 00:15:06.313 "config": [] 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "subsystem": "accel", 00:15:06.313 "config": [ 00:15:06.313 { 00:15:06.313 "method": "accel_set_options", 00:15:06.313 "params": { 00:15:06.313 "small_cache_size": 128, 00:15:06.313 "large_cache_size": 16, 00:15:06.313 "task_count": 2048, 00:15:06.313 "sequence_count": 2048, 00:15:06.313 "buf_count": 2048 00:15:06.313 } 00:15:06.313 } 00:15:06.313 ] 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "subsystem": "bdev", 00:15:06.313 "config": [ 00:15:06.313 { 00:15:06.313 "method": "bdev_set_options", 00:15:06.313 "params": { 00:15:06.313 "bdev_io_pool_size": 65535, 00:15:06.313 "bdev_io_cache_size": 256, 00:15:06.313 "bdev_auto_examine": true, 00:15:06.313 "iobuf_small_cache_size": 128, 00:15:06.313 "iobuf_large_cache_size": 16 00:15:06.313 } 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "method": "bdev_raid_set_options", 00:15:06.313 "params": { 00:15:06.313 "process_window_size_kb": 1024 00:15:06.313 } 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "method": "bdev_iscsi_set_options", 00:15:06.313 "params": { 00:15:06.313 "timeout_sec": 30 00:15:06.313 } 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "method": "bdev_nvme_set_options", 00:15:06.313 "params": { 00:15:06.313 "action_on_timeout": "none", 00:15:06.313 "timeout_us": 0, 00:15:06.313 "timeout_admin_us": 0, 00:15:06.313 "keep_alive_timeout_ms": 10000, 00:15:06.313 "arbitration_burst": 0, 00:15:06.313 "low_priority_weight": 0, 00:15:06.313 "medium_priority_weight": 0, 00:15:06.313 "high_priority_weight": 0, 00:15:06.313 "nvme_adminq_poll_period_us": 10000, 00:15:06.313 "nvme_ioq_poll_period_us": 0, 00:15:06.313 "io_queue_requests": 512, 00:15:06.313 "delay_cmd_submit": true, 00:15:06.313 "transport_retry_count": 4, 00:15:06.313 "bdev_retry_count": 3, 00:15:06.313 "transport_ack_timeout": 0, 00:15:06.313 "ctrlr_loss_timeout_sec": 0, 00:15:06.313 "reconnect_delay_sec": 0, 00:15:06.313 "fast_io_fail_timeout_sec": 0, 00:15:06.313 "disable_auto_failback": false, 00:15:06.313 "generate_uuids": false, 00:15:06.313 "transport_tos": 0, 00:15:06.313 "nvme_error_stat": false, 00:15:06.313 "rdma_srq_size": 0, 00:15:06.313 "io_path_stat": false, 00:15:06.313 "allow_accel_sequence": false, 00:15:06.313 "rdma_max_cq_size": 0, 00:15:06.313 "rdma_cm_event_timeout_ms": 0, 00:15:06.313 "dhchap_digests": [ 00:15:06.313 "sha256", 00:15:06.313 "sha384", 00:15:06.313 "sha512" 00:15:06.313 ], 00:15:06.313 "dhchap_dhgroups": [ 00:15:06.313 "null", 00:15:06.313 "ffdhe2048", 00:15:06.313 "ffdhe3072", 00:15:06.313 "ffdhe4096", 00:15:06.313 "ffdhe6144", 00:15:06.313 "ffdhe8192" 00:15:06.313 ] 00:15:06.313 } 00:15:06.313 }, 00:15:06.313 { 00:15:06.313 "method": "bdev_nvme_attach_controller", 00:15:06.313 "params": { 00:15:06.313 "name": "nvme0", 00:15:06.313 "trtype": "TCP", 00:15:06.313 "adrfam": "IPv4", 00:15:06.313 "traddr": "10.0.0.2", 00:15:06.313 "trsvcid": "4420", 00:15:06.313 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:06.313 "prchk_reftag": false, 00:15:06.314 "prchk_guard": false, 00:15:06.314 "ctrlr_loss_timeout_sec": 0, 00:15:06.314 "reconnect_delay_sec": 0, 00:15:06.314 "fast_io_fail_timeout_sec": 0, 00:15:06.314 "psk": "key0", 00:15:06.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.314 "hdgst": false, 00:15:06.314 "ddgst": false 00:15:06.314 } 00:15:06.314 }, 00:15:06.314 { 00:15:06.314 "method": "bdev_nvme_set_hotplug", 00:15:06.314 "params": { 00:15:06.314 "period_us": 100000, 00:15:06.314 "enable": false 00:15:06.314 } 00:15:06.314 }, 00:15:06.314 { 00:15:06.314 "method": "bdev_enable_histogram", 00:15:06.314 "params": { 00:15:06.314 "name": "nvme0n1", 00:15:06.314 "enable": true 00:15:06.314 } 00:15:06.314 }, 00:15:06.314 { 00:15:06.314 "method": "bdev_wait_for_examine" 00:15:06.314 } 00:15:06.314 ] 00:15:06.314 }, 00:15:06.314 { 00:15:06.314 "subsystem": "nbd", 00:15:06.314 "config": [] 00:15:06.314 } 00:15:06.314 ] 00:15:06.314 }' 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 74231 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74231 ']' 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74231 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74231 00:15:06.314 killing process with pid 74231 00:15:06.314 Received shutdown signal, test time was about 1.000000 seconds 00:15:06.314 00:15:06.314 Latency(us) 00:15:06.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.314 =================================================================================================================== 00:15:06.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74231' 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74231 00:15:06.314 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74231 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 74198 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74198 ']' 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74198 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74198 00:15:06.572 killing process with pid 74198 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74198' 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74198 00:15:06.572 22:41:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 74198 00:15:07.140 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:07.141 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:07.141 "subsystems": [ 00:15:07.141 { 00:15:07.141 "subsystem": "keyring", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "keyring_file_add_key", 00:15:07.141 "params": { 00:15:07.141 "name": "key0", 00:15:07.141 "path": "/tmp/tmp.lLXS4YOAHK" 00:15:07.141 } 00:15:07.141 } 00:15:07.141 ] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "iobuf", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "iobuf_set_options", 00:15:07.141 "params": { 00:15:07.141 "small_pool_count": 8192, 00:15:07.141 "large_pool_count": 1024, 00:15:07.141 "small_bufsize": 8192, 00:15:07.141 "large_bufsize": 135168 00:15:07.141 } 00:15:07.141 } 00:15:07.141 ] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "sock", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "sock_set_default_impl", 00:15:07.141 "params": { 00:15:07.141 "impl_name": "uring" 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "sock_impl_set_options", 00:15:07.141 "params": { 00:15:07.141 "impl_name": "ssl", 00:15:07.141 "recv_buf_size": 4096, 00:15:07.141 "send_buf_size": 4096, 00:15:07.141 "enable_recv_pipe": true, 00:15:07.141 "enable_quickack": false, 00:15:07.141 "enable_placement_id": 0, 00:15:07.141 "enable_zerocopy_send_server": true, 00:15:07.141 "enable_zerocopy_send_client": false, 00:15:07.141 "zerocopy_threshold": 0, 00:15:07.141 "tls_version": 0, 00:15:07.141 "enable_ktls": false 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "sock_impl_set_options", 00:15:07.141 "params": { 00:15:07.141 "impl_name": "posix", 00:15:07.141 "recv_buf_size": 2097152, 00:15:07.141 "send_buf_size": 2097152, 00:15:07.141 "enable_recv_pipe": true, 00:15:07.141 "enable_quickack": false, 00:15:07.141 "enable_placement_id": 0, 00:15:07.141 "enable_zerocopy_send_server": true, 00:15:07.141 "enable_zerocopy_send_client": false, 00:15:07.141 "zerocopy_threshold": 0, 00:15:07.141 "tls_version": 0, 00:15:07.141 "enable_ktls": false 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "sock_impl_set_options", 00:15:07.141 "params": { 00:15:07.141 "impl_name": "uring", 00:15:07.141 "recv_buf_size": 2097152, 00:15:07.141 "send_buf_size": 2097152, 00:15:07.141 "enable_recv_pipe": true, 00:15:07.141 "enable_quickack": false, 00:15:07.141 "enable_placement_id": 0, 00:15:07.141 "enable_zerocopy_send_server": false, 00:15:07.141 "enable_zerocopy_send_client": false, 00:15:07.141 "zerocopy_threshold": 0, 00:15:07.141 "tls_version": 0, 00:15:07.141 "enable_ktls": false 00:15:07.141 } 00:15:07.141 } 00:15:07.141 ] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "vmd", 00:15:07.141 "config": [] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "accel", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "accel_set_options", 00:15:07.141 "params": { 00:15:07.141 "small_cache_size": 128, 00:15:07.141 "large_cache_size": 16, 00:15:07.141 "task_count": 2048, 00:15:07.141 "sequence_count": 2048, 00:15:07.141 "buf_count": 2048 00:15:07.141 } 00:15:07.141 } 00:15:07.141 ] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "bdev", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "bdev_set_options", 00:15:07.141 "params": { 00:15:07.141 "bdev_io_pool_size": 65535, 00:15:07.141 "bdev_io_cache_size": 256, 
00:15:07.141 "bdev_auto_examine": true, 00:15:07.141 "iobuf_small_cache_size": 128, 00:15:07.141 "iobuf_large_cache_size": 16 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "bdev_raid_set_options", 00:15:07.141 "params": { 00:15:07.141 "process_window_size_kb": 1024 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "bdev_iscsi_set_options", 00:15:07.141 "params": { 00:15:07.141 "timeout_sec": 30 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "bdev_nvme_set_options", 00:15:07.141 "params": { 00:15:07.141 "action_on_timeout": "none", 00:15:07.141 "timeout_us": 0, 00:15:07.141 "timeout_admin_us": 0, 00:15:07.141 "keep_alive_timeout_ms": 10000, 00:15:07.141 "arbitration_burst": 0, 00:15:07.141 "low_priority_weight": 0, 00:15:07.141 "medium_priority_weight": 0, 00:15:07.141 "high_priority_weight": 0, 00:15:07.141 "nvme_adminq_poll_period_us": 10000, 00:15:07.141 "nvme_ioq_poll_period_us": 0, 00:15:07.141 "io_queue_requests": 0, 00:15:07.141 "delay_cmd_submit": true, 00:15:07.141 "transport_retry_count": 4, 00:15:07.141 "bdev_retry_count": 3, 00:15:07.141 "transport_ack_timeout": 0, 00:15:07.141 "ctrlr_loss_timeout_sec": 0, 00:15:07.141 "reconnect_delay_sec": 0, 00:15:07.141 "fast_io_fail_timeout_sec": 0, 00:15:07.141 "disable_auto_failback": false, 00:15:07.141 "generate_uuids": false, 00:15:07.141 "transport_tos": 0, 00:15:07.141 "nvme_error_stat": false, 00:15:07.141 "rdma_srq_size": 0, 00:15:07.141 "io_path_stat": false, 00:15:07.141 "allow_accel_sequence": false, 00:15:07.141 "rdma_max_cq_size": 0, 00:15:07.141 "rdma_cm_event_timeout_ms": 0, 00:15:07.141 "dhchap_digests": [ 00:15:07.141 "sha256", 00:15:07.141 "sha384", 00:15:07.141 "sha512" 00:15:07.141 ], 00:15:07.141 "dhchap_dhgroups": [ 00:15:07.141 "null", 00:15:07.141 "ffdhe2048", 00:15:07.141 "ffdhe3072", 00:15:07.141 "ffdhe4096", 00:15:07.141 "ffdhe6144", 00:15:07.141 "ffdhe8192" 00:15:07.141 ] 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "bdev_nvme_set_hotplug", 00:15:07.141 "params": { 00:15:07.141 "period_us": 100000, 00:15:07.141 "enable": false 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "bdev_malloc_create", 00:15:07.141 "params": { 00:15:07.141 "name": "malloc0", 00:15:07.141 "num_blocks": 8192, 00:15:07.141 "block_size": 4096, 00:15:07.141 "physical_block_size": 4096, 00:15:07.141 "uuid": "3b2452cb-340a-4dc6-bcb4-5f4e172281e2", 00:15:07.141 "optimal_io_boundary": 0 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "bdev_wait_for_examine" 00:15:07.141 } 00:15:07.141 ] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "nbd", 00:15:07.141 "config": [] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "scheduler", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "framework_set_scheduler", 00:15:07.141 "params": { 00:15:07.141 "name": "static" 00:15:07.141 } 00:15:07.141 } 00:15:07.141 ] 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "subsystem": "nvmf", 00:15:07.141 "config": [ 00:15:07.141 { 00:15:07.141 "method": "nvmf_set_config", 00:15:07.141 "params": { 00:15:07.141 "discovery_filter": "match_any", 00:15:07.141 "admin_cmd_passthru": { 00:15:07.141 "identify_ctrlr": false 00:15:07.141 } 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "nvmf_set_max_subsystems", 00:15:07.141 "params": { 00:15:07.141 "max_subsystems": 1024 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "nvmf_set_crdt", 00:15:07.141 "params": { 00:15:07.141 "crdt1": 
0, 00:15:07.141 "crdt2": 0, 00:15:07.141 "crdt3": 0 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "nvmf_create_transport", 00:15:07.141 "params": { 00:15:07.141 "trtype": "TCP", 00:15:07.141 "max_queue_depth": 128, 00:15:07.141 "max_io_qpairs_per_ctrlr": 127, 00:15:07.141 "in_capsule_data_size": 4096, 00:15:07.141 "max_io_size": 131072, 00:15:07.141 "io_unit_size": 131072, 00:15:07.141 "max_aq_depth": 128, 00:15:07.141 "num_shared_buffers": 511, 00:15:07.141 "buf_cache_size": 4294967295, 00:15:07.141 "dif_insert_or_strip": false, 00:15:07.141 "zcopy": false, 00:15:07.141 "c2h_success": false, 00:15:07.141 "sock_priority": 0, 00:15:07.141 "abort_timeout_sec": 1, 00:15:07.141 "ack_timeout": 0, 00:15:07.141 "data_wr_pool_size": 0 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "nvmf_create_subsystem", 00:15:07.141 "params": { 00:15:07.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.141 "allow_any_host": false, 00:15:07.141 "serial_number": "00000000000000000000", 00:15:07.141 "model_number": "SPDK bdev Controller", 00:15:07.141 "max_namespaces": 32, 00:15:07.141 "min_cntlid": 1, 00:15:07.141 "max_cntlid": 65519, 00:15:07.141 "ana_reporting": false 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "nvmf_subsystem_add_host", 00:15:07.141 "params": { 00:15:07.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.141 "host": "nqn.2016-06.io.spdk:host1", 00:15:07.141 "psk": "key0" 00:15:07.141 } 00:15:07.141 }, 00:15:07.141 { 00:15:07.141 "method": "nvmf_subsystem_add_ns", 00:15:07.141 "params": { 00:15:07.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.141 "namespace": { 00:15:07.141 "nsid": 1, 00:15:07.141 "bdev_name": "malloc0", 00:15:07.141 "nguid": "3B2452CB340A4DC6BCB45F4E172281E2", 00:15:07.141 "uuid": "3b2452cb-340a-4dc6-bcb4-5f4e172281e2", 00:15:07.141 "no_auto_visible": false 00:15:07.141 } 00:15:07.141 } 00:15:07.141 }, 00:15:07.142 { 00:15:07.142 "method": "nvmf_subsystem_add_listener", 00:15:07.142 "params": { 00:15:07.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.142 "listen_address": { 00:15:07.142 "trtype": "TCP", 00:15:07.142 "adrfam": "IPv4", 00:15:07.142 "traddr": "10.0.0.2", 00:15:07.142 "trsvcid": "4420" 00:15:07.142 }, 00:15:07.142 "secure_channel": false, 00:15:07.142 "sock_impl": "ssl" 00:15:07.142 } 00:15:07.142 } 00:15:07.142 ] 00:15:07.142 } 00:15:07.142 ] 00:15:07.142 }' 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
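The JSON blob above never touches disk: nvmfappstart echoes it and the freshly started target reads it back through a pipe-backed file descriptor (-c /dev/fd/62), as the trace below shows. A minimal sketch of the same pattern in isolation, assuming the config shown above has been captured in a shell variable named tgtcfg (an illustrative name, not one used by the test scripts):

  # launch nvmf_tgt inside the test namespace, feeding the in-memory JSON via process substitution
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
      -c <(echo "$tgtcfg") &
  nvmfpid=$!
  # crude stand-in for the harness's waitforlisten helper: poll the RPC socket until it answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
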
00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74291 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74291 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74291 ']' 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.142 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.142 [2024-07-15 22:41:24.780850] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:07.142 [2024-07-15 22:41:24.781306] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.142 [2024-07-15 22:41:24.922151] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.401 [2024-07-15 22:41:25.059642] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.401 [2024-07-15 22:41:25.060093] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.401 [2024-07-15 22:41:25.060287] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.401 [2024-07-15 22:41:25.060454] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.401 [2024-07-15 22:41:25.060470] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.401 [2024-07-15 22:41:25.060578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.401 [2024-07-15 22:41:25.230814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.659 [2024-07-15 22:41:25.310178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.659 [2024-07-15 22:41:25.342080] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.659 [2024-07-15 22:41:25.342350] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
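The initiator side mirrors this: bdevperf is started idle (-z) on its own RPC socket and handed the client configuration, the keyring PSK plus the bdev_nvme_attach_controller call carrying "psk": "key0", through /dev/fd/63 in the same way, as traced below. A condensed sketch of that launch, assuming the client JSON captured earlier with save_config is held in the bperfcfg variable (the name the test itself uses) and that the caller backgrounds the process:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 \
      -c <(echo "$bperfcfg") &
  bdevperf_pid=$!
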
00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=74329 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 74329 /var/tmp/bdevperf.sock 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74329 ']' 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.224 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:08.224 "subsystems": [ 00:15:08.224 { 00:15:08.224 "subsystem": "keyring", 00:15:08.224 "config": [ 00:15:08.224 { 00:15:08.224 "method": "keyring_file_add_key", 00:15:08.224 "params": { 00:15:08.224 "name": "key0", 00:15:08.224 "path": "/tmp/tmp.lLXS4YOAHK" 00:15:08.224 } 00:15:08.224 } 00:15:08.224 ] 00:15:08.224 }, 00:15:08.224 { 00:15:08.224 "subsystem": "iobuf", 00:15:08.224 "config": [ 00:15:08.224 { 00:15:08.224 "method": "iobuf_set_options", 00:15:08.224 "params": { 00:15:08.224 "small_pool_count": 8192, 00:15:08.224 "large_pool_count": 1024, 00:15:08.224 "small_bufsize": 8192, 00:15:08.224 "large_bufsize": 135168 00:15:08.224 } 00:15:08.224 } 00:15:08.224 ] 00:15:08.224 }, 00:15:08.224 { 00:15:08.224 "subsystem": "sock", 00:15:08.224 "config": [ 00:15:08.224 { 00:15:08.224 "method": "sock_set_default_impl", 00:15:08.224 "params": { 00:15:08.224 "impl_name": "uring" 00:15:08.224 } 00:15:08.224 }, 00:15:08.224 { 00:15:08.224 "method": "sock_impl_set_options", 00:15:08.224 "params": { 00:15:08.224 "impl_name": "ssl", 00:15:08.224 "recv_buf_size": 4096, 00:15:08.224 "send_buf_size": 4096, 00:15:08.224 "enable_recv_pipe": true, 00:15:08.224 "enable_quickack": false, 00:15:08.224 "enable_placement_id": 0, 00:15:08.224 "enable_zerocopy_send_server": true, 00:15:08.224 "enable_zerocopy_send_client": false, 00:15:08.224 "zerocopy_threshold": 0, 00:15:08.224 "tls_version": 0, 00:15:08.224 "enable_ktls": false 00:15:08.224 } 00:15:08.224 }, 00:15:08.224 { 00:15:08.224 "method": "sock_impl_set_options", 00:15:08.224 "params": { 00:15:08.224 "impl_name": "posix", 00:15:08.224 "recv_buf_size": 2097152, 00:15:08.224 "send_buf_size": 2097152, 00:15:08.224 "enable_recv_pipe": true, 00:15:08.224 "enable_quickack": false, 00:15:08.224 "enable_placement_id": 0, 00:15:08.224 "enable_zerocopy_send_server": true, 00:15:08.224 "enable_zerocopy_send_client": false, 00:15:08.224 "zerocopy_threshold": 0, 00:15:08.224 "tls_version": 0, 00:15:08.224 "enable_ktls": false 00:15:08.224 } 00:15:08.224 }, 00:15:08.224 { 00:15:08.224 "method": "sock_impl_set_options", 00:15:08.224 "params": { 00:15:08.224 "impl_name": "uring", 00:15:08.224 "recv_buf_size": 2097152, 00:15:08.224 "send_buf_size": 2097152, 00:15:08.224 "enable_recv_pipe": true, 00:15:08.224 
"enable_quickack": false, 00:15:08.224 "enable_placement_id": 0, 00:15:08.224 "enable_zerocopy_send_server": false, 00:15:08.224 "enable_zerocopy_send_client": false, 00:15:08.224 "zerocopy_threshold": 0, 00:15:08.224 "tls_version": 0, 00:15:08.224 "enable_ktls": false 00:15:08.224 } 00:15:08.224 } 00:15:08.224 ] 00:15:08.224 }, 00:15:08.224 { 00:15:08.225 "subsystem": "vmd", 00:15:08.225 "config": [] 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "subsystem": "accel", 00:15:08.225 "config": [ 00:15:08.225 { 00:15:08.225 "method": "accel_set_options", 00:15:08.225 "params": { 00:15:08.225 "small_cache_size": 128, 00:15:08.225 "large_cache_size": 16, 00:15:08.225 "task_count": 2048, 00:15:08.225 "sequence_count": 2048, 00:15:08.225 "buf_count": 2048 00:15:08.225 } 00:15:08.225 } 00:15:08.225 ] 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "subsystem": "bdev", 00:15:08.225 "config": [ 00:15:08.225 { 00:15:08.225 "method": "bdev_set_options", 00:15:08.225 "params": { 00:15:08.225 "bdev_io_pool_size": 65535, 00:15:08.225 "bdev_io_cache_size": 256, 00:15:08.225 "bdev_auto_examine": true, 00:15:08.225 "iobuf_small_cache_size": 128, 00:15:08.225 "iobuf_large_cache_size": 16 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_raid_set_options", 00:15:08.225 "params": { 00:15:08.225 "process_window_size_kb": 1024 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_iscsi_set_options", 00:15:08.225 "params": { 00:15:08.225 "timeout_sec": 30 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_nvme_set_options", 00:15:08.225 "params": { 00:15:08.225 "action_on_timeout": "none", 00:15:08.225 "timeout_us": 0, 00:15:08.225 "timeout_admin_us": 0, 00:15:08.225 "keep_alive_timeout_ms": 10000, 00:15:08.225 "arbitration_burst": 0, 00:15:08.225 "low_priority_weight": 0, 00:15:08.225 "medium_priority_weight": 0, 00:15:08.225 "high_priority_weight": 0, 00:15:08.225 "nvme_adminq_poll_period_us": 10000, 00:15:08.225 "nvme_ioq_poll_period_us": 0, 00:15:08.225 "io_queue_requests": 512, 00:15:08.225 "delay_cmd_submit": true, 00:15:08.225 "transport_retry_count": 4, 00:15:08.225 "bdev_retry_count": 3, 00:15:08.225 "transport_ack_timeout": 0, 00:15:08.225 "ctrlr_loss_timeout_sec": 0, 00:15:08.225 "reconnect_delay_sec": 0, 00:15:08.225 "fast_io_fail_timeout_sec": 0, 00:15:08.225 "disable_auto_failback": false, 00:15:08.225 "generate_uuids": false, 00:15:08.225 "transport_tos": 0, 00:15:08.225 "nvme_error_stat": false, 00:15:08.225 "rdma_srq_size": 0, 00:15:08.225 "io_path_stat": false, 00:15:08.225 "allow_accel_sequence": false, 00:15:08.225 "rdma_max_cq_size": 0, 00:15:08.225 "rdma_cm_event_timeout_ms": 0, 00:15:08.225 "dhchap_digests": [ 00:15:08.225 "sha256", 00:15:08.225 "sha384", 00:15:08.225 "sha512" 00:15:08.225 ], 00:15:08.225 "dhchap_dhgroups": [ 00:15:08.225 "null", 00:15:08.225 "ffdhe2048", 00:15:08.225 "ffdhe3072", 00:15:08.225 "ffdhe4096", 00:15:08.225 "ffdhe6144", 00:15:08.225 "ffdhe8192" 00:15:08.225 ] 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_nvme_attach_controller", 00:15:08.225 "params": { 00:15:08.225 "name": "nvme0", 00:15:08.225 "trtype": "TCP", 00:15:08.225 "adrfam": "IPv4", 00:15:08.225 "traddr": "10.0.0.2", 00:15:08.225 "trsvcid": "4420", 00:15:08.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.225 "prchk_reftag": false, 00:15:08.225 "prchk_guard": false, 00:15:08.225 "ctrlr_loss_timeout_sec": 0, 00:15:08.225 "reconnect_delay_sec": 0, 00:15:08.225 "fast_io_fail_timeout_sec": 0, 00:15:08.225 "psk": 
"key0", 00:15:08.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.225 "hdgst": false, 00:15:08.225 "ddgst": false 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_nvme_set_hotplug", 00:15:08.225 "params": { 00:15:08.225 "period_us": 100000, 00:15:08.225 "enable": false 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_enable_histogram", 00:15:08.225 "params": { 00:15:08.225 "name": "nvme0n1", 00:15:08.225 "enable": true 00:15:08.225 } 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "method": "bdev_wait_for_examine" 00:15:08.225 } 00:15:08.225 ] 00:15:08.225 }, 00:15:08.225 { 00:15:08.225 "subsystem": "nbd", 00:15:08.225 "config": [] 00:15:08.225 } 00:15:08.225 ] 00:15:08.225 }' 00:15:08.225 [2024-07-15 22:41:25.997425] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:08.225 [2024-07-15 22:41:25.997790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74329 ] 00:15:08.483 [2024-07-15 22:41:26.140408] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.483 [2024-07-15 22:41:26.268816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.740 [2024-07-15 22:41:26.407613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:08.740 [2024-07-15 22:41:26.458171] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.304 22:41:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.304 22:41:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:09.304 22:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:09.304 22:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:09.561 22:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.561 22:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.818 Running I/O for 1 seconds... 
00:15:10.752 00:15:10.752 Latency(us) 00:15:10.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.752 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:10.752 Verification LBA range: start 0x0 length 0x2000 00:15:10.752 nvme0n1 : 1.03 3250.51 12.70 0.00 0.00 38856.93 8936.73 38130.04 00:15:10.752 =================================================================================================================== 00:15:10.752 Total : 3250.51 12.70 0.00 0.00 38856.93 8936.73 38130.04 00:15:10.752 0 00:15:10.752 22:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:10.752 22:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:10.752 22:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:10.752 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:10.752 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:10.752 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:10.753 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:10.753 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:10.753 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:10.753 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:10.753 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:10.753 nvmf_trace.0 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74329 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74329 ']' 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74329 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74329 00:15:11.011 killing process with pid 74329 00:15:11.011 Received shutdown signal, test time was about 1.000000 seconds 00:15:11.011 00:15:11.011 Latency(us) 00:15:11.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.011 =================================================================================================================== 00:15:11.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74329' 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74329 00:15:11.011 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74329 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.268 rmmod nvme_tcp 00:15:11.268 rmmod nvme_fabrics 00:15:11.268 rmmod nvme_keyring 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74291 ']' 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74291 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74291 ']' 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74291 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74291 00:15:11.268 killing process with pid 74291 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74291' 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74291 00:15:11.268 22:41:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74291 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.INg4NQRaoZ /tmp/tmp.zhBu0RObRN /tmp/tmp.lLXS4YOAHK 00:15:11.525 00:15:11.525 real 1m28.841s 00:15:11.525 user 2m20.824s 00:15:11.525 sys 0m28.927s 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.525 22:41:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 ************************************ 00:15:11.525 END TEST nvmf_tls 00:15:11.525 ************************************ 00:15:11.525 22:41:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:11.525 22:41:29 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:11.525 22:41:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.525 22:41:29 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.525 22:41:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 ************************************ 00:15:11.525 START TEST nvmf_fips 00:15:11.525 ************************************ 00:15:11.526 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:11.784 * Looking for test storage... 00:15:11.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:11.784 Error setting digest 00:15:11.784 00B2C658F57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:11.784 00B2C658F57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:11.784 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.785 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:12.043 Cannot find device "nvmf_tgt_br" 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.043 Cannot find device "nvmf_tgt_br2" 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:12.043 Cannot find device "nvmf_tgt_br" 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:12.043 Cannot find device "nvmf_tgt_br2" 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.043 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:12.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:15:12.301 00:15:12.301 --- 10.0.0.2 ping statistics --- 00:15:12.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.301 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:12.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:12.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:12.301 00:15:12.301 --- 10.0.0.3 ping statistics --- 00:15:12.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.301 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:12.301 00:15:12.301 --- 10.0.0.1 ping statistics --- 00:15:12.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.301 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74590 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74590 00:15:12.301 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74590 ']' 00:15:12.302 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.302 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.302 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.302 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.302 22:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.302 [2024-07-15 22:41:30.026002] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:15:12.302 [2024-07-15 22:41:30.026101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.559 [2024-07-15 22:41:30.165276] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.559 [2024-07-15 22:41:30.285681] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.559 [2024-07-15 22:41:30.285738] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.559 [2024-07-15 22:41:30.285753] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.559 [2024-07-15 22:41:30.285763] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.559 [2024-07-15 22:41:30.285772] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.559 [2024-07-15 22:41:30.285810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.559 [2024-07-15 22:41:30.341612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:13.496 22:41:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.496 22:41:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:13.496 22:41:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.496 22:41:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.496 22:41:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.496 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.496 [2024-07-15 22:41:31.289056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.496 [2024-07-15 22:41:31.305013] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:13.496 [2024-07-15 22:41:31.305229] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.754 [2024-07-15 22:41:31.336924] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:13.754 malloc0 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74635 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74635 /var/tmp/bdevperf.sock 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74635 ']' 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.754 22:41:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.754 [2024-07-15 22:41:31.442054] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:13.754 [2024-07-15 22:41:31.442184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74635 ] 00:15:13.754 [2024-07-15 22:41:31.579608] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.012 [2024-07-15 22:41:31.711629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.012 [2024-07-15 22:41:31.770234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.945 22:41:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.945 22:41:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:14.945 22:41:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:14.945 [2024-07-15 22:41:32.755221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.945 [2024-07-15 22:41:32.755364] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:15.203 TLSTESTn1 00:15:15.203 22:41:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.203 Running I/O for 10 seconds... 
00:15:25.234 00:15:25.234 Latency(us) 00:15:25.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.234 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:25.234 Verification LBA range: start 0x0 length 0x2000 00:15:25.234 TLSTESTn1 : 10.02 3896.72 15.22 0.00 0.00 32784.88 6553.60 25380.31 00:15:25.234 =================================================================================================================== 00:15:25.234 Total : 3896.72 15.22 0.00 0.00 32784.88 6553.60 25380.31 00:15:25.234 0 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:25.234 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:25.234 nvmf_trace.0 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74635 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74635 ']' 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74635 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74635 00:15:25.492 killing process with pid 74635 00:15:25.492 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.492 00:15:25.492 Latency(us) 00:15:25.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.492 =================================================================================================================== 00:15:25.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74635' 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74635 00:15:25.492 [2024-07-15 22:41:43.143596] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:25.492 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74635 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.750 rmmod nvme_tcp 00:15:25.750 rmmod nvme_fabrics 00:15:25.750 rmmod nvme_keyring 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74590 ']' 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74590 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74590 ']' 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74590 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74590 00:15:25.750 killing process with pid 74590 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74590' 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74590 00:15:25.750 [2024-07-15 22:41:43.504083] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:25.750 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74590 00:15:26.007 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.007 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.007 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.007 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.007 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.007 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.008 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.008 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.008 22:41:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:26.008 22:41:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:26.008 00:15:26.008 real 0m14.499s 00:15:26.008 user 0m19.922s 00:15:26.008 sys 0m5.871s 00:15:26.008 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.008 ************************************ 00:15:26.008 END TEST nvmf_fips 00:15:26.008 ************************************ 00:15:26.008 22:41:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:26.265 22:41:43 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:26.265 22:41:43 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:26.265 22:41:43 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:26.265 22:41:43 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.265 22:41:43 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.265 22:41:43 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:26.265 22:41:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.265 22:41:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.265 ************************************ 00:15:26.265 START TEST nvmf_identify 00:15:26.265 ************************************ 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:26.265 * Looking for test storage... 00:15:26.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.265 22:41:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.265 22:41:43 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.265 22:41:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:26.265 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:26.266 Cannot find device "nvmf_tgt_br" 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.266 Cannot find device "nvmf_tgt_br2" 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:26.266 22:41:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:26.266 Cannot find device "nvmf_tgt_br" 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:26.266 Cannot find device "nvmf_tgt_br2" 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:26.266 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:26.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:15:26.523 00:15:26.523 --- 10.0.0.2 ping statistics --- 00:15:26.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.523 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:26.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:26.523 00:15:26.523 --- 10.0.0.3 ping statistics --- 00:15:26.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.523 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:26.523 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:26.781 00:15:26.781 --- 10.0.0.1 ping statistics --- 00:15:26.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.781 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74987 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74987 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74987 ']' 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.781 22:41:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.781 [2024-07-15 22:41:44.467243] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:26.781 [2024-07-15 22:41:44.467688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.038 [2024-07-15 22:41:44.615353] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.038 [2024-07-15 22:41:44.765210] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.038 [2024-07-15 22:41:44.765547] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.038 [2024-07-15 22:41:44.765583] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.038 [2024-07-15 22:41:44.765592] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.038 [2024-07-15 22:41:44.765616] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:27.038 [2024-07-15 22:41:44.765769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.038 [2024-07-15 22:41:44.765908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.038 [2024-07-15 22:41:44.766701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.038 [2024-07-15 22:41:44.766733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.038 [2024-07-15 22:41:44.819706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 [2024-07-15 22:41:45.537540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 Malloc0 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 [2024-07-15 22:41:45.641350] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 [ 00:15:27.971 { 00:15:27.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.971 "subtype": "Discovery", 00:15:27.971 "listen_addresses": [ 00:15:27.971 { 00:15:27.971 "trtype": "TCP", 00:15:27.971 "adrfam": "IPv4", 00:15:27.971 "traddr": "10.0.0.2", 00:15:27.971 "trsvcid": "4420" 00:15:27.971 } 00:15:27.971 ], 00:15:27.971 "allow_any_host": true, 00:15:27.971 "hosts": [] 00:15:27.971 }, 00:15:27.971 { 00:15:27.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.971 "subtype": "NVMe", 00:15:27.971 "listen_addresses": [ 00:15:27.971 { 00:15:27.971 "trtype": "TCP", 00:15:27.971 "adrfam": "IPv4", 00:15:27.971 "traddr": "10.0.0.2", 00:15:27.971 "trsvcid": "4420" 00:15:27.971 } 00:15:27.971 ], 00:15:27.971 "allow_any_host": true, 00:15:27.971 "hosts": [], 00:15:27.971 "serial_number": "SPDK00000000000001", 00:15:27.971 "model_number": "SPDK bdev Controller", 00:15:27.971 "max_namespaces": 32, 00:15:27.971 "min_cntlid": 1, 00:15:27.971 "max_cntlid": 65519, 00:15:27.971 "namespaces": [ 00:15:27.971 { 00:15:27.971 "nsid": 1, 00:15:27.971 "bdev_name": "Malloc0", 00:15:27.971 "name": "Malloc0", 00:15:27.971 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:27.971 "eui64": "ABCDEF0123456789", 00:15:27.971 "uuid": "d2d33570-26d2-464d-b60d-5fe7bca91a7b" 00:15:27.971 } 00:15:27.971 ] 00:15:27.971 } 00:15:27.971 ] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.971 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:27.971 [2024-07-15 22:41:45.697260] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:15:27.971 [2024-07-15 22:41:45.697527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75022 ] 00:15:28.234 [2024-07-15 22:41:45.838159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:28.234 [2024-07-15 22:41:45.838257] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:28.234 [2024-07-15 22:41:45.838265] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:28.234 [2024-07-15 22:41:45.838280] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:28.234 [2024-07-15 22:41:45.838288] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:28.234 [2024-07-15 22:41:45.838650] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:28.234 [2024-07-15 22:41:45.838729] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bc4510 0 00:15:28.234 [2024-07-15 22:41:45.842900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:28.234 [2024-07-15 22:41:45.842928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:28.234 [2024-07-15 22:41:45.842935] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:28.234 [2024-07-15 22:41:45.842939] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:28.234 [2024-07-15 22:41:45.842986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.234 [2024-07-15 22:41:45.842994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.234 [2024-07-15 22:41:45.843001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.234 [2024-07-15 22:41:45.843021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:28.234 [2024-07-15 22:41:45.843057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.234 [2024-07-15 22:41:45.850890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.234 [2024-07-15 22:41:45.850930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.234 [2024-07-15 22:41:45.850937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.234 [2024-07-15 22:41:45.850943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.234 [2024-07-15 22:41:45.850959] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:28.234 [2024-07-15 22:41:45.850970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:28.234 [2024-07-15 22:41:45.850978] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:28.234 [2024-07-15 22:41:45.851004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.234 [2024-07-15 22:41:45.851011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.234 
[2024-07-15 22:41:45.851015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.234 [2024-07-15 22:41:45.851030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.234 [2024-07-15 22:41:45.851065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.234 [2024-07-15 22:41:45.851148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.851156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.851160] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.851170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:28.235 [2024-07-15 22:41:45.851178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:28.235 [2024-07-15 22:41:45.851186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.851203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.235 [2024-07-15 22:41:45.851223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.851285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.851292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.851296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.851307] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:28.235 [2024-07-15 22:41:45.851316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:28.235 [2024-07-15 22:41:45.851325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.851341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.235 [2024-07-15 22:41:45.851360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.851414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.851421] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.851425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.851435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:28.235 [2024-07-15 22:41:45.851445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.851461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.235 [2024-07-15 22:41:45.851479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.851530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.851537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.851541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.851550] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:28.235 [2024-07-15 22:41:45.851555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:28.235 [2024-07-15 22:41:45.851563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:28.235 [2024-07-15 22:41:45.851669] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:28.235 [2024-07-15 22:41:45.851692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:28.235 [2024-07-15 22:41:45.851702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.851719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.235 [2024-07-15 22:41:45.851738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.851799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.851806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.851810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 
[2024-07-15 22:41:45.851814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.851820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:28.235 [2024-07-15 22:41:45.851830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.851846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.235 [2024-07-15 22:41:45.851864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.851937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.851945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.851949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.851953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.851958] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:28.235 [2024-07-15 22:41:45.851963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:28.235 [2024-07-15 22:41:45.851972] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:28.235 [2024-07-15 22:41:45.851983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:28.235 [2024-07-15 22:41:45.851996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.852009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.235 [2024-07-15 22:41:45.852030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.852136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.235 [2024-07-15 22:41:45.852145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.235 [2024-07-15 22:41:45.852150] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852154] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc4510): datao=0, datal=4096, cccid=0 00:15:28.235 [2024-07-15 22:41:45.852159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26f00) on tqpair(0x1bc4510): expected_datao=0, payload_size=4096 00:15:28.235 [2024-07-15 22:41:45.852164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 
[2024-07-15 22:41:45.852174] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852179] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.852194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.852198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.852212] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:28.235 [2024-07-15 22:41:45.852217] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:28.235 [2024-07-15 22:41:45.852227] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:28.235 [2024-07-15 22:41:45.852233] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:28.235 [2024-07-15 22:41:45.852238] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:28.235 [2024-07-15 22:41:45.852243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:28.235 [2024-07-15 22:41:45.852253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:28.235 [2024-07-15 22:41:45.852262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.852280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:28.235 [2024-07-15 22:41:45.852302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.235 [2024-07-15 22:41:45.852367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.235 [2024-07-15 22:41:45.852374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.235 [2024-07-15 22:41:45.852378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.235 [2024-07-15 22:41:45.852390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.852405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.235 [2024-07-15 22:41:45.852413] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bc4510) 00:15:28.235 [2024-07-15 22:41:45.852426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.235 [2024-07-15 22:41:45.852433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.235 [2024-07-15 22:41:45.852437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.852447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.236 [2024-07-15 22:41:45.852454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.852468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.236 [2024-07-15 22:41:45.852473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:28.236 [2024-07-15 22:41:45.852487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:28.236 [2024-07-15 22:41:45.852495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.852506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.236 [2024-07-15 22:41:45.852528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26f00, cid 0, qid 0 00:15:28.236 [2024-07-15 22:41:45.852535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27080, cid 1, qid 0 00:15:28.236 [2024-07-15 22:41:45.852540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27200, cid 2, qid 0 00:15:28.236 [2024-07-15 22:41:45.852545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.236 [2024-07-15 22:41:45.852550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27500, cid 4, qid 0 00:15:28.236 [2024-07-15 22:41:45.852671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.236 [2024-07-15 22:41:45.852679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.236 [2024-07-15 22:41:45.852683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27500) on tqpair=0x1bc4510 00:15:28.236 [2024-07-15 22:41:45.852693] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:28.236 [2024-07-15 22:41:45.852699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:28.236 [2024-07-15 22:41:45.852712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.852724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.236 [2024-07-15 22:41:45.852744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27500, cid 4, qid 0 00:15:28.236 [2024-07-15 22:41:45.852838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.236 [2024-07-15 22:41:45.852855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.236 [2024-07-15 22:41:45.852860] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852864] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc4510): datao=0, datal=4096, cccid=4 00:15:28.236 [2024-07-15 22:41:45.852882] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c27500) on tqpair(0x1bc4510): expected_datao=0, payload_size=4096 00:15:28.236 [2024-07-15 22:41:45.852888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852896] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852900] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.236 [2024-07-15 22:41:45.852915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.236 [2024-07-15 22:41:45.852919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27500) on tqpair=0x1bc4510 00:15:28.236 [2024-07-15 22:41:45.852939] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:28.236 [2024-07-15 22:41:45.852974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.852981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.852989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.236 [2024-07-15 22:41:45.852997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.853011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.236 [2024-07-15 22:41:45.853039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1c27500, cid 4, qid 0 00:15:28.236 [2024-07-15 22:41:45.853047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27680, cid 5, qid 0 00:15:28.236 [2024-07-15 22:41:45.853186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.236 [2024-07-15 22:41:45.853196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.236 [2024-07-15 22:41:45.853200] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853204] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc4510): datao=0, datal=1024, cccid=4 00:15:28.236 [2024-07-15 22:41:45.853208] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c27500) on tqpair(0x1bc4510): expected_datao=0, payload_size=1024 00:15:28.236 [2024-07-15 22:41:45.853213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853221] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853225] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.236 [2024-07-15 22:41:45.853237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.236 [2024-07-15 22:41:45.853240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27680) on tqpair=0x1bc4510 00:15:28.236 [2024-07-15 22:41:45.853264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.236 [2024-07-15 22:41:45.853272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.236 [2024-07-15 22:41:45.853276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27500) on tqpair=0x1bc4510 00:15:28.236 [2024-07-15 22:41:45.853294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.853307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.236 [2024-07-15 22:41:45.853331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27500, cid 4, qid 0 00:15:28.236 [2024-07-15 22:41:45.853409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.236 [2024-07-15 22:41:45.853416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.236 [2024-07-15 22:41:45.853420] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853424] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc4510): datao=0, datal=3072, cccid=4 00:15:28.236 [2024-07-15 22:41:45.853429] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c27500) on tqpair(0x1bc4510): expected_datao=0, payload_size=3072 00:15:28.236 [2024-07-15 22:41:45.853433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853441] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853445] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.236 [2024-07-15 22:41:45.853459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.236 [2024-07-15 22:41:45.853463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27500) on tqpair=0x1bc4510 00:15:28.236 [2024-07-15 22:41:45.853477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc4510) 00:15:28.236 [2024-07-15 22:41:45.853490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.236 [2024-07-15 22:41:45.853513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27500, cid 4, qid 0 00:15:28.236 [2024-07-15 22:41:45.853591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.236 [2024-07-15 22:41:45.853604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.236 [2024-07-15 22:41:45.853608] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853612] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc4510): datao=0, datal=8, cccid=4 00:15:28.236 [2024-07-15 22:41:45.853618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c27500) on tqpair(0x1bc4510): expected_datao=0, payload_size=8 00:15:28.236 [2024-07-15 22:41:45.853622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853630] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853634] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.236 [2024-07-15 22:41:45.853658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.236 [2024-07-15 22:41:45.853662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.236 [2024-07-15 22:41:45.853666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27500) on tqpair=0x1bc4510 00:15:28.236 ===================================================== 00:15:28.236 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:28.236 ===================================================== 00:15:28.236 Controller Capabilities/Features 00:15:28.236 ================================ 00:15:28.236 Vendor ID: 0000 00:15:28.236 Subsystem Vendor ID: 0000 00:15:28.236 Serial Number: .................... 00:15:28.236 Model Number: ........................................ 
00:15:28.236 Firmware Version: 24.09 00:15:28.236 Recommended Arb Burst: 0 00:15:28.236 IEEE OUI Identifier: 00 00 00 00:15:28.236 Multi-path I/O 00:15:28.236 May have multiple subsystem ports: No 00:15:28.236 May have multiple controllers: No 00:15:28.236 Associated with SR-IOV VF: No 00:15:28.236 Max Data Transfer Size: 131072 00:15:28.236 Max Number of Namespaces: 0 00:15:28.236 Max Number of I/O Queues: 1024 00:15:28.236 NVMe Specification Version (VS): 1.3 00:15:28.236 NVMe Specification Version (Identify): 1.3 00:15:28.236 Maximum Queue Entries: 128 00:15:28.236 Contiguous Queues Required: Yes 00:15:28.236 Arbitration Mechanisms Supported 00:15:28.236 Weighted Round Robin: Not Supported 00:15:28.236 Vendor Specific: Not Supported 00:15:28.237 Reset Timeout: 15000 ms 00:15:28.237 Doorbell Stride: 4 bytes 00:15:28.237 NVM Subsystem Reset: Not Supported 00:15:28.237 Command Sets Supported 00:15:28.237 NVM Command Set: Supported 00:15:28.237 Boot Partition: Not Supported 00:15:28.237 Memory Page Size Minimum: 4096 bytes 00:15:28.237 Memory Page Size Maximum: 4096 bytes 00:15:28.237 Persistent Memory Region: Not Supported 00:15:28.237 Optional Asynchronous Events Supported 00:15:28.237 Namespace Attribute Notices: Not Supported 00:15:28.237 Firmware Activation Notices: Not Supported 00:15:28.237 ANA Change Notices: Not Supported 00:15:28.237 PLE Aggregate Log Change Notices: Not Supported 00:15:28.237 LBA Status Info Alert Notices: Not Supported 00:15:28.237 EGE Aggregate Log Change Notices: Not Supported 00:15:28.237 Normal NVM Subsystem Shutdown event: Not Supported 00:15:28.237 Zone Descriptor Change Notices: Not Supported 00:15:28.237 Discovery Log Change Notices: Supported 00:15:28.237 Controller Attributes 00:15:28.237 128-bit Host Identifier: Not Supported 00:15:28.237 Non-Operational Permissive Mode: Not Supported 00:15:28.237 NVM Sets: Not Supported 00:15:28.237 Read Recovery Levels: Not Supported 00:15:28.237 Endurance Groups: Not Supported 00:15:28.237 Predictable Latency Mode: Not Supported 00:15:28.237 Traffic Based Keep ALive: Not Supported 00:15:28.237 Namespace Granularity: Not Supported 00:15:28.237 SQ Associations: Not Supported 00:15:28.237 UUID List: Not Supported 00:15:28.237 Multi-Domain Subsystem: Not Supported 00:15:28.237 Fixed Capacity Management: Not Supported 00:15:28.237 Variable Capacity Management: Not Supported 00:15:28.237 Delete Endurance Group: Not Supported 00:15:28.237 Delete NVM Set: Not Supported 00:15:28.237 Extended LBA Formats Supported: Not Supported 00:15:28.237 Flexible Data Placement Supported: Not Supported 00:15:28.237 00:15:28.237 Controller Memory Buffer Support 00:15:28.237 ================================ 00:15:28.237 Supported: No 00:15:28.237 00:15:28.237 Persistent Memory Region Support 00:15:28.237 ================================ 00:15:28.237 Supported: No 00:15:28.237 00:15:28.237 Admin Command Set Attributes 00:15:28.237 ============================ 00:15:28.237 Security Send/Receive: Not Supported 00:15:28.237 Format NVM: Not Supported 00:15:28.237 Firmware Activate/Download: Not Supported 00:15:28.237 Namespace Management: Not Supported 00:15:28.237 Device Self-Test: Not Supported 00:15:28.237 Directives: Not Supported 00:15:28.237 NVMe-MI: Not Supported 00:15:28.237 Virtualization Management: Not Supported 00:15:28.237 Doorbell Buffer Config: Not Supported 00:15:28.237 Get LBA Status Capability: Not Supported 00:15:28.237 Command & Feature Lockdown Capability: Not Supported 00:15:28.237 Abort Command Limit: 1 00:15:28.237 Async 
Event Request Limit: 4 00:15:28.237 Number of Firmware Slots: N/A 00:15:28.237 Firmware Slot 1 Read-Only: N/A 00:15:28.237 Firmware Activation Without Reset: N/A 00:15:28.237 Multiple Update Detection Support: N/A 00:15:28.237 Firmware Update Granularity: No Information Provided 00:15:28.237 Per-Namespace SMART Log: No 00:15:28.237 Asymmetric Namespace Access Log Page: Not Supported 00:15:28.237 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:28.237 Command Effects Log Page: Not Supported 00:15:28.237 Get Log Page Extended Data: Supported 00:15:28.237 Telemetry Log Pages: Not Supported 00:15:28.237 Persistent Event Log Pages: Not Supported 00:15:28.237 Supported Log Pages Log Page: May Support 00:15:28.237 Commands Supported & Effects Log Page: Not Supported 00:15:28.237 Feature Identifiers & Effects Log Page:May Support 00:15:28.237 NVMe-MI Commands & Effects Log Page: May Support 00:15:28.237 Data Area 4 for Telemetry Log: Not Supported 00:15:28.237 Error Log Page Entries Supported: 128 00:15:28.237 Keep Alive: Not Supported 00:15:28.237 00:15:28.237 NVM Command Set Attributes 00:15:28.237 ========================== 00:15:28.237 Submission Queue Entry Size 00:15:28.237 Max: 1 00:15:28.237 Min: 1 00:15:28.237 Completion Queue Entry Size 00:15:28.237 Max: 1 00:15:28.237 Min: 1 00:15:28.237 Number of Namespaces: 0 00:15:28.237 Compare Command: Not Supported 00:15:28.237 Write Uncorrectable Command: Not Supported 00:15:28.237 Dataset Management Command: Not Supported 00:15:28.237 Write Zeroes Command: Not Supported 00:15:28.237 Set Features Save Field: Not Supported 00:15:28.237 Reservations: Not Supported 00:15:28.237 Timestamp: Not Supported 00:15:28.237 Copy: Not Supported 00:15:28.237 Volatile Write Cache: Not Present 00:15:28.237 Atomic Write Unit (Normal): 1 00:15:28.237 Atomic Write Unit (PFail): 1 00:15:28.237 Atomic Compare & Write Unit: 1 00:15:28.237 Fused Compare & Write: Supported 00:15:28.237 Scatter-Gather List 00:15:28.237 SGL Command Set: Supported 00:15:28.237 SGL Keyed: Supported 00:15:28.237 SGL Bit Bucket Descriptor: Not Supported 00:15:28.237 SGL Metadata Pointer: Not Supported 00:15:28.237 Oversized SGL: Not Supported 00:15:28.237 SGL Metadata Address: Not Supported 00:15:28.237 SGL Offset: Supported 00:15:28.237 Transport SGL Data Block: Not Supported 00:15:28.237 Replay Protected Memory Block: Not Supported 00:15:28.237 00:15:28.237 Firmware Slot Information 00:15:28.237 ========================= 00:15:28.237 Active slot: 0 00:15:28.237 00:15:28.237 00:15:28.237 Error Log 00:15:28.237 ========= 00:15:28.237 00:15:28.237 Active Namespaces 00:15:28.237 ================= 00:15:28.237 Discovery Log Page 00:15:28.237 ================== 00:15:28.237 Generation Counter: 2 00:15:28.237 Number of Records: 2 00:15:28.237 Record Format: 0 00:15:28.237 00:15:28.237 Discovery Log Entry 0 00:15:28.237 ---------------------- 00:15:28.237 Transport Type: 3 (TCP) 00:15:28.237 Address Family: 1 (IPv4) 00:15:28.237 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:28.237 Entry Flags: 00:15:28.237 Duplicate Returned Information: 1 00:15:28.237 Explicit Persistent Connection Support for Discovery: 1 00:15:28.237 Transport Requirements: 00:15:28.237 Secure Channel: Not Required 00:15:28.237 Port ID: 0 (0x0000) 00:15:28.237 Controller ID: 65535 (0xffff) 00:15:28.237 Admin Max SQ Size: 128 00:15:28.237 Transport Service Identifier: 4420 00:15:28.237 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:28.237 Transport Address: 10.0.0.2 00:15:28.237 
Discovery Log Entry 1 00:15:28.237 ---------------------- 00:15:28.237 Transport Type: 3 (TCP) 00:15:28.237 Address Family: 1 (IPv4) 00:15:28.237 Subsystem Type: 2 (NVM Subsystem) 00:15:28.237 Entry Flags: 00:15:28.237 Duplicate Returned Information: 0 00:15:28.237 Explicit Persistent Connection Support for Discovery: 0 00:15:28.237 Transport Requirements: 00:15:28.237 Secure Channel: Not Required 00:15:28.237 Port ID: 0 (0x0000) 00:15:28.237 Controller ID: 65535 (0xffff) 00:15:28.237 Admin Max SQ Size: 128 00:15:28.237 Transport Service Identifier: 4420 00:15:28.237 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:28.237 Transport Address: 10.0.0.2 [2024-07-15 22:41:45.853774] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:28.237 [2024-07-15 22:41:45.853789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c26f00) on tqpair=0x1bc4510 00:15:28.237 [2024-07-15 22:41:45.853797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.237 [2024-07-15 22:41:45.853803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27080) on tqpair=0x1bc4510 00:15:28.237 [2024-07-15 22:41:45.853808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.237 [2024-07-15 22:41:45.853813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27200) on tqpair=0x1bc4510 00:15:28.237 [2024-07-15 22:41:45.853818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.237 [2024-07-15 22:41:45.853823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.237 [2024-07-15 22:41:45.853828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.237 [2024-07-15 22:41:45.853838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.237 [2024-07-15 22:41:45.853843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.237 [2024-07-15 22:41:45.853847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.237 [2024-07-15 22:41:45.853854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.237 [2024-07-15 22:41:45.853889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.237 [2024-07-15 22:41:45.853956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.237 [2024-07-15 22:41:45.853963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.237 [2024-07-15 22:41:45.853967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.237 [2024-07-15 22:41:45.853971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.237 [2024-07-15 22:41:45.853985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.237 [2024-07-15 22:41:45.853990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.237 [2024-07-15 22:41:45.853994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 
22:41:45.854002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.854114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.854118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.854127] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:28.238 [2024-07-15 22:41:45.854132] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:28.238 [2024-07-15 22:41:45.854142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854147] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.854159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.854259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.854267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.854285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.854302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.854388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.854392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.854407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854416] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.854423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.854508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.854511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.854526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.854542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.854623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.854627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.854642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.854658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.854747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.854751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.854766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.854774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.854782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.854800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.854862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.858884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.858904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.858910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.858927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.858954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.858959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc4510) 00:15:28.238 [2024-07-15 22:41:45.858968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:45.858998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c27380, cid 3, qid 0 00:15:28.238 [2024-07-15 22:41:45.859069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:45.859077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:45.859081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:45.859085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c27380) on tqpair=0x1bc4510 00:15:28.238 [2024-07-15 22:41:45.859093] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:28.238 00:15:28.238 22:41:45 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:28.238 [2024-07-15 22:41:45.901225] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:15:28.238 [2024-07-15 22:41:45.901278] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75024 ] 00:15:28.238 [2024-07-15 22:41:46.038161] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:28.238 [2024-07-15 22:41:46.038254] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:28.238 [2024-07-15 22:41:46.038263] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:28.238 [2024-07-15 22:41:46.038277] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:28.238 [2024-07-15 22:41:46.038284] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:28.238 [2024-07-15 22:41:46.038643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:28.238 [2024-07-15 22:41:46.038739] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb36510 0 00:15:28.238 [2024-07-15 22:41:46.042896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:28.238 [2024-07-15 22:41:46.042921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:28.238 [2024-07-15 22:41:46.042927] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:28.238 [2024-07-15 22:41:46.042931] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:28.238 [2024-07-15 22:41:46.042975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:46.042983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:46.042987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.238 [2024-07-15 22:41:46.043003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:28.238 [2024-07-15 22:41:46.043036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.238 [2024-07-15 22:41:46.050886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:46.050908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:46.050913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:46.050919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.238 [2024-07-15 22:41:46.050934] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:28.238 [2024-07-15 22:41:46.050943] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:28.238 [2024-07-15 22:41:46.050950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:28.238 [2024-07-15 22:41:46.050969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:46.050975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.238 [2024-07-15 22:41:46.050979] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.238 [2024-07-15 22:41:46.050989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.238 [2024-07-15 22:41:46.051018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.238 [2024-07-15 22:41:46.051072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.238 [2024-07-15 22:41:46.051080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.238 [2024-07-15 22:41:46.051084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.051094] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:28.239 [2024-07-15 22:41:46.051102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:28.239 [2024-07-15 22:41:46.051110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.051127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.239 [2024-07-15 22:41:46.051153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.051197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.239 [2024-07-15 22:41:46.051204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.239 [2024-07-15 22:41:46.051208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.051219] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:28.239 [2024-07-15 22:41:46.051228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:28.239 [2024-07-15 22:41:46.051236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.051253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.239 [2024-07-15 22:41:46.051271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.051317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.239 [2024-07-15 22:41:46.051324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.239 [2024-07-15 22:41:46.051328] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.051338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:28.239 [2024-07-15 22:41:46.051348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.051365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.239 [2024-07-15 22:41:46.051381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.051424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.239 [2024-07-15 22:41:46.051431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.239 [2024-07-15 22:41:46.051435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.051444] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:28.239 [2024-07-15 22:41:46.051450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:28.239 [2024-07-15 22:41:46.051458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:28.239 [2024-07-15 22:41:46.051566] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:28.239 [2024-07-15 22:41:46.051573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:28.239 [2024-07-15 22:41:46.051583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.051600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.239 [2024-07-15 22:41:46.051618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.051661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.239 [2024-07-15 22:41:46.051668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.239 [2024-07-15 22:41:46.051672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.051682] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:28.239 [2024-07-15 22:41:46.051693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.051710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.239 [2024-07-15 22:41:46.051734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.051779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.239 [2024-07-15 22:41:46.051796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.239 [2024-07-15 22:41:46.051801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.051810] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:28.239 [2024-07-15 22:41:46.051816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:28.239 [2024-07-15 22:41:46.051825] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:28.239 [2024-07-15 22:41:46.051839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:28.239 [2024-07-15 22:41:46.051850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.051855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.051863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.239 [2024-07-15 22:41:46.051900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.051986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.239 [2024-07-15 22:41:46.051994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.239 [2024-07-15 22:41:46.051998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.052002] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=4096, cccid=0 00:15:28.239 [2024-07-15 22:41:46.052007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb98f00) on tqpair(0xb36510): expected_datao=0, payload_size=4096 00:15:28.239 [2024-07-15 22:41:46.052013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.052022] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.052028] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 
22:41:46.052037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.239 [2024-07-15 22:41:46.052043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.239 [2024-07-15 22:41:46.052047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.052051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.239 [2024-07-15 22:41:46.052061] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:28.239 [2024-07-15 22:41:46.052067] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:28.239 [2024-07-15 22:41:46.052077] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:28.239 [2024-07-15 22:41:46.052083] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:28.239 [2024-07-15 22:41:46.052088] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:28.239 [2024-07-15 22:41:46.052094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:28.239 [2024-07-15 22:41:46.052104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:28.239 [2024-07-15 22:41:46.052112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.052116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.239 [2024-07-15 22:41:46.052120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.239 [2024-07-15 22:41:46.052129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:28.239 [2024-07-15 22:41:46.052150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.239 [2024-07-15 22:41:46.052196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.240 [2024-07-15 22:41:46.052207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.240 [2024-07-15 22:41:46.052214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.240 [2024-07-15 22:41:46.052232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.240 [2024-07-15 22:41:46.052255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb36510) 00:15:28.240 
[2024-07-15 22:41:46.052269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.240 [2024-07-15 22:41:46.052276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.240 [2024-07-15 22:41:46.052304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.240 [2024-07-15 22:41:46.052324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.240 [2024-07-15 22:41:46.052383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb98f00, cid 0, qid 0 00:15:28.240 [2024-07-15 22:41:46.052390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99080, cid 1, qid 0 00:15:28.240 [2024-07-15 22:41:46.052396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99200, cid 2, qid 0 00:15:28.240 [2024-07-15 22:41:46.052401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.240 [2024-07-15 22:41:46.052406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.240 [2024-07-15 22:41:46.052489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.240 [2024-07-15 22:41:46.052496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.240 [2024-07-15 22:41:46.052500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.240 [2024-07-15 22:41:46.052510] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:28.240 [2024-07-15 22:41:46.052516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052525] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:28.240 [2024-07-15 22:41:46.052574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.240 [2024-07-15 22:41:46.052619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.240 [2024-07-15 22:41:46.052626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.240 [2024-07-15 22:41:46.052630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.240 [2024-07-15 22:41:46.052702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.240 [2024-07-15 22:41:46.052757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.240 [2024-07-15 22:41:46.052814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.240 [2024-07-15 22:41:46.052821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.240 [2024-07-15 22:41:46.052825] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052829] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=4096, cccid=4 00:15:28.240 [2024-07-15 22:41:46.052843] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99500) on tqpair(0xb36510): expected_datao=0, payload_size=4096 00:15:28.240 [2024-07-15 22:41:46.052848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052856] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052860] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.240 [2024-07-15 22:41:46.052888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:28.240 [2024-07-15 22:41:46.052892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.240 [2024-07-15 22:41:46.052909] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:28.240 [2024-07-15 22:41:46.052922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.052943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.052948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.052956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.240 [2024-07-15 22:41:46.052978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.240 [2024-07-15 22:41:46.053064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.240 [2024-07-15 22:41:46.053074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.240 [2024-07-15 22:41:46.053078] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053082] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=4096, cccid=4 00:15:28.240 [2024-07-15 22:41:46.053087] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99500) on tqpair(0xb36510): expected_datao=0, payload_size=4096 00:15:28.240 [2024-07-15 22:41:46.053093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053100] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053105] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.240 [2024-07-15 22:41:46.053120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.240 [2024-07-15 22:41:46.053124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.240 [2024-07-15 22:41:46.053145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.053157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.053167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.240 [2024-07-15 22:41:46.053179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.240 [2024-07-15 22:41:46.053201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.240 [2024-07-15 22:41:46.053269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.240 [2024-07-15 22:41:46.053278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.240 [2024-07-15 22:41:46.053282] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053286] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=4096, cccid=4 00:15:28.240 [2024-07-15 22:41:46.053290] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99500) on tqpair(0xb36510): expected_datao=0, payload_size=4096 00:15:28.240 [2024-07-15 22:41:46.053295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053303] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053310] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.240 [2024-07-15 22:41:46.053329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.240 [2024-07-15 22:41:46.053333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.240 [2024-07-15 22:41:46.053338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.240 [2024-07-15 22:41:46.053348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.053358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.053370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.053377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:28.240 [2024-07-15 22:41:46.053382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:28.241 [2024-07-15 22:41:46.053388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:28.241 [2024-07-15 22:41:46.053394] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:28.241 [2024-07-15 22:41:46.053399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:28.241 [2024-07-15 22:41:46.053405] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:28.241 [2024-07-15 22:41:46.053426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.241 [2024-07-15 22:41:46.053495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.241 [2024-07-15 22:41:46.053503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99680, cid 5, qid 0 00:15:28.241 [2024-07-15 22:41:46.053562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.053569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.053573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.053585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.053591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.053595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99680) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.053610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99680, cid 5, qid 0 00:15:28.241 [2024-07-15 22:41:46.053687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.053693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.053697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99680) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.053712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99680, cid 5, qid 0 00:15:28.241 [2024-07-15 22:41:46.053783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.053789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.053793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99680) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.053808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99680, cid 5, qid 0 00:15:28.241 [2024-07-15 22:41:46.053900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.053908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.053912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99680) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.053937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.053982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.053989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.053998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb36510) 00:15:28.241 [2024-07-15 22:41:46.054009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.241 [2024-07-15 22:41:46.054038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99680, cid 5, qid 0 00:15:28.241 [2024-07-15 22:41:46.054045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99500, cid 4, qid 0 00:15:28.241 [2024-07-15 22:41:46.054051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99800, cid 6, qid 0 00:15:28.241 [2024-07-15 
22:41:46.054056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99980, cid 7, qid 0 00:15:28.241 [2024-07-15 22:41:46.054187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.241 [2024-07-15 22:41:46.054200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.241 [2024-07-15 22:41:46.054204] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054208] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=8192, cccid=5 00:15:28.241 [2024-07-15 22:41:46.054214] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99680) on tqpair(0xb36510): expected_datao=0, payload_size=8192 00:15:28.241 [2024-07-15 22:41:46.054230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054248] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054254] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.241 [2024-07-15 22:41:46.054267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.241 [2024-07-15 22:41:46.054271] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054275] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=512, cccid=4 00:15:28.241 [2024-07-15 22:41:46.054279] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99500) on tqpair(0xb36510): expected_datao=0, payload_size=512 00:15:28.241 [2024-07-15 22:41:46.054284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054291] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054295] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.241 [2024-07-15 22:41:46.054307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.241 [2024-07-15 22:41:46.054311] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054315] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=512, cccid=6 00:15:28.241 [2024-07-15 22:41:46.054320] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99800) on tqpair(0xb36510): expected_datao=0, payload_size=512 00:15:28.241 [2024-07-15 22:41:46.054325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054331] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054335] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:28.241 [2024-07-15 22:41:46.054347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:28.241 [2024-07-15 22:41:46.054350] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054354] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb36510): datao=0, datal=4096, cccid=7 00:15:28.241 [2024-07-15 22:41:46.054359] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb99980) on tqpair(0xb36510): expected_datao=0, payload_size=4096 00:15:28.241 [2024-07-15 22:41:46.054364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054371] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054374] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.054386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.054390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99680) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.054411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.054418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.054422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99500) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.054441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 [2024-07-15 22:41:46.054448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.241 [2024-07-15 22:41:46.054453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.241 [2024-07-15 22:41:46.054459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99800) on tqpair=0xb36510 00:15:28.241 [2024-07-15 22:41:46.054472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.241 ===================================================== 00:15:28.241 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.241 ===================================================== 00:15:28.241 Controller Capabilities/Features 00:15:28.241 ================================ 00:15:28.241 Vendor ID: 8086 00:15:28.241 Subsystem Vendor ID: 8086 00:15:28.241 Serial Number: SPDK00000000000001 00:15:28.241 Model Number: SPDK bdev Controller 00:15:28.241 Firmware Version: 24.09 00:15:28.241 Recommended Arb Burst: 6 00:15:28.241 IEEE OUI Identifier: e4 d2 5c 00:15:28.241 Multi-path I/O 00:15:28.242 May have multiple subsystem ports: Yes 00:15:28.242 May have multiple controllers: Yes 00:15:28.242 Associated with SR-IOV VF: No 00:15:28.242 Max Data Transfer Size: 131072 00:15:28.242 Max Number of Namespaces: 32 00:15:28.242 Max Number of I/O Queues: 127 00:15:28.242 NVMe Specification Version (VS): 1.3 00:15:28.242 NVMe Specification Version (Identify): 1.3 00:15:28.242 Maximum Queue Entries: 128 00:15:28.242 Contiguous Queues Required: Yes 00:15:28.242 Arbitration Mechanisms Supported 00:15:28.242 Weighted Round Robin: Not Supported 00:15:28.242 Vendor Specific: Not Supported 00:15:28.242 Reset Timeout: 15000 ms 00:15:28.242 Doorbell Stride: 4 bytes 00:15:28.242 NVM Subsystem Reset: Not Supported 00:15:28.242 Command Sets Supported 00:15:28.242 NVM Command Set: Supported 00:15:28.242 Boot Partition: Not Supported 00:15:28.242 Memory Page Size Minimum: 4096 bytes 00:15:28.242 Memory Page Size Maximum: 4096 bytes 00:15:28.242 Persistent Memory Region: Not Supported 00:15:28.242 
Optional Asynchronous Events Supported 00:15:28.242 Namespace Attribute Notices: Supported 00:15:28.242 Firmware Activation Notices: Not Supported 00:15:28.242 ANA Change Notices: Not Supported 00:15:28.242 PLE Aggregate Log Change Notices: Not Supported 00:15:28.242 LBA Status Info Alert Notices: Not Supported 00:15:28.242 EGE Aggregate Log Change Notices: Not Supported 00:15:28.242 Normal NVM Subsystem Shutdown event: Not Supported 00:15:28.242 Zone Descriptor Change Notices: Not Supported 00:15:28.242 Discovery Log Change Notices: Not Supported 00:15:28.242 Controller Attributes 00:15:28.242 128-bit Host Identifier: Supported 00:15:28.242 Non-Operational Permissive Mode: Not Supported 00:15:28.242 NVM Sets: Not Supported 00:15:28.242 Read Recovery Levels: Not Supported 00:15:28.242 Endurance Groups: Not Supported 00:15:28.242 Predictable Latency Mode: Not Supported 00:15:28.242 Traffic Based Keep ALive: Not Supported 00:15:28.242 Namespace Granularity: Not Supported 00:15:28.242 SQ Associations: Not Supported 00:15:28.242 UUID List: Not Supported 00:15:28.242 Multi-Domain Subsystem: Not Supported 00:15:28.242 Fixed Capacity Management: Not Supported 00:15:28.242 Variable Capacity Management: Not Supported 00:15:28.242 Delete Endurance Group: Not Supported 00:15:28.242 Delete NVM Set: Not Supported 00:15:28.242 Extended LBA Formats Supported: Not Supported 00:15:28.242 Flexible Data Placement Supported: Not Supported 00:15:28.242 00:15:28.242 Controller Memory Buffer Support 00:15:28.242 ================================ 00:15:28.242 Supported: No 00:15:28.242 00:15:28.242 Persistent Memory Region Support 00:15:28.242 ================================ 00:15:28.242 Supported: No 00:15:28.242 00:15:28.242 Admin Command Set Attributes 00:15:28.242 ============================ 00:15:28.242 Security Send/Receive: Not Supported 00:15:28.242 Format NVM: Not Supported 00:15:28.242 Firmware Activate/Download: Not Supported 00:15:28.242 Namespace Management: Not Supported 00:15:28.242 Device Self-Test: Not Supported 00:15:28.242 Directives: Not Supported 00:15:28.242 NVMe-MI: Not Supported 00:15:28.242 Virtualization Management: Not Supported 00:15:28.242 Doorbell Buffer Config: Not Supported 00:15:28.242 Get LBA Status Capability: Not Supported 00:15:28.242 Command & Feature Lockdown Capability: Not Supported 00:15:28.242 Abort Command Limit: 4 00:15:28.242 Async Event Request Limit: 4 00:15:28.242 Number of Firmware Slots: N/A 00:15:28.242 Firmware Slot 1 Read-Only: N/A 00:15:28.242 Firmware Activation Without Reset: N/A 00:15:28.242 Multiple Update Detection Support: N/A 00:15:28.242 Firmware Update Granularity: No Information Provided 00:15:28.242 Per-Namespace SMART Log: No 00:15:28.242 Asymmetric Namespace Access Log Page: Not Supported 00:15:28.242 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:28.242 Command Effects Log Page: Supported 00:15:28.242 Get Log Page Extended Data: Supported 00:15:28.242 Telemetry Log Pages: Not Supported 00:15:28.242 Persistent Event Log Pages: Not Supported 00:15:28.242 Supported Log Pages Log Page: May Support 00:15:28.242 Commands Supported & Effects Log Page: Not Supported 00:15:28.242 Feature Identifiers & Effects Log Page:May Support 00:15:28.242 NVMe-MI Commands & Effects Log Page: May Support 00:15:28.242 Data Area 4 for Telemetry Log: Not Supported 00:15:28.242 Error Log Page Entries Supported: 128 00:15:28.242 Keep Alive: Supported 00:15:28.242 Keep Alive Granularity: 10000 ms 00:15:28.242 00:15:28.242 NVM Command Set Attributes 00:15:28.242 
========================== 00:15:28.242 Submission Queue Entry Size 00:15:28.242 Max: 64 00:15:28.242 Min: 64 00:15:28.242 Completion Queue Entry Size 00:15:28.242 Max: 16 00:15:28.242 Min: 16 00:15:28.242 Number of Namespaces: 32 00:15:28.242 Compare Command: Supported 00:15:28.242 Write Uncorrectable Command: Not Supported 00:15:28.242 Dataset Management Command: Supported 00:15:28.242 Write Zeroes Command: Supported 00:15:28.242 Set Features Save Field: Not Supported 00:15:28.242 Reservations: Supported 00:15:28.242 Timestamp: Not Supported 00:15:28.242 Copy: Supported 00:15:28.242 Volatile Write Cache: Present 00:15:28.242 Atomic Write Unit (Normal): 1 00:15:28.242 Atomic Write Unit (PFail): 1 00:15:28.242 Atomic Compare & Write Unit: 1 00:15:28.242 Fused Compare & Write: Supported 00:15:28.242 Scatter-Gather List 00:15:28.242 SGL Command Set: Supported 00:15:28.242 SGL Keyed: Supported 00:15:28.242 SGL Bit Bucket Descriptor: Not Supported 00:15:28.242 SGL Metadata Pointer: Not Supported 00:15:28.242 Oversized SGL: Not Supported 00:15:28.242 SGL Metadata Address: Not Supported 00:15:28.242 SGL Offset: Supported 00:15:28.242 Transport SGL Data Block: Not Supported 00:15:28.242 Replay Protected Memory Block: Not Supported 00:15:28.242 00:15:28.242 Firmware Slot Information 00:15:28.242 ========================= 00:15:28.242 Active slot: 1 00:15:28.242 Slot 1 Firmware Revision: 24.09 00:15:28.242 00:15:28.242 00:15:28.242 Commands Supported and Effects 00:15:28.242 ============================== 00:15:28.242 Admin Commands 00:15:28.242 -------------- 00:15:28.242 Get Log Page (02h): Supported 00:15:28.242 Identify (06h): Supported 00:15:28.242 Abort (08h): Supported 00:15:28.242 Set Features (09h): Supported 00:15:28.242 Get Features (0Ah): Supported 00:15:28.242 Asynchronous Event Request (0Ch): Supported 00:15:28.242 Keep Alive (18h): Supported 00:15:28.242 I/O Commands 00:15:28.242 ------------ 00:15:28.242 Flush (00h): Supported LBA-Change 00:15:28.242 Write (01h): Supported LBA-Change 00:15:28.242 Read (02h): Supported 00:15:28.242 Compare (05h): Supported 00:15:28.242 Write Zeroes (08h): Supported LBA-Change 00:15:28.242 Dataset Management (09h): Supported LBA-Change 00:15:28.242 Copy (19h): Supported LBA-Change 00:15:28.242 00:15:28.242 Error Log 00:15:28.242 ========= 00:15:28.242 00:15:28.242 Arbitration 00:15:28.242 =========== 00:15:28.242 Arbitration Burst: 1 00:15:28.242 00:15:28.242 Power Management 00:15:28.242 ================ 00:15:28.242 Number of Power States: 1 00:15:28.242 Current Power State: Power State #0 00:15:28.242 Power State #0: 00:15:28.242 Max Power: 0.00 W 00:15:28.242 Non-Operational State: Operational 00:15:28.242 Entry Latency: Not Reported 00:15:28.242 Exit Latency: Not Reported 00:15:28.242 Relative Read Throughput: 0 00:15:28.242 Relative Read Latency: 0 00:15:28.242 Relative Write Throughput: 0 00:15:28.242 Relative Write Latency: 0 00:15:28.242 Idle Power: Not Reported 00:15:28.242 Active Power: Not Reported 00:15:28.242 Non-Operational Permissive Mode: Not Supported 00:15:28.242 00:15:28.242 Health Information 00:15:28.242 ================== 00:15:28.242 Critical Warnings: 00:15:28.242 Available Spare Space: OK 00:15:28.242 Temperature: OK 00:15:28.242 Device Reliability: OK 00:15:28.242 Read Only: No 00:15:28.242 Volatile Memory Backup: OK 00:15:28.242 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:28.242 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:28.242 Available Spare: 0% 00:15:28.242 Available Spare Threshold: 0% 00:15:28.242 Life 
Percentage Used:[2024-07-15 22:41:46.054481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.242 [2024-07-15 22:41:46.054485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.242 [2024-07-15 22:41:46.054489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99980) on tqpair=0xb36510 00:15:28.242 [2024-07-15 22:41:46.054614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.242 [2024-07-15 22:41:46.054622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb36510) 00:15:28.242 [2024-07-15 22:41:46.054631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.242 [2024-07-15 22:41:46.054656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99980, cid 7, qid 0 00:15:28.242 [2024-07-15 22:41:46.054700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.242 [2024-07-15 22:41:46.054707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.242 [2024-07-15 22:41:46.054711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.242 [2024-07-15 22:41:46.054715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99980) on tqpair=0xb36510 00:15:28.242 [2024-07-15 22:41:46.054759] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:28.242 [2024-07-15 22:41:46.054772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb98f00) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.054780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.243 [2024-07-15 22:41:46.054786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99080) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.054791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.243 [2024-07-15 22:41:46.054796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99200) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.054801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.243 [2024-07-15 22:41:46.054806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.054811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.243 [2024-07-15 22:41:46.054820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.054825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.054829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.054837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.054860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.058889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 
22:41:46.058907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.058912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.058917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.058928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.058933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.058937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.058948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.058979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059115] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:28.243 [2024-07-15 22:41:46.059120] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:28.243 [2024-07-15 22:41:46.059131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059322] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059674] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.059909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.059922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.059926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.243 [2024-07-15 22:41:46.059943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.059952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.243 [2024-07-15 22:41:46.059959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.243 [2024-07-15 22:41:46.059979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.243 [2024-07-15 22:41:46.060025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.243 [2024-07-15 22:41:46.060031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.243 [2024-07-15 22:41:46.060035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.243 [2024-07-15 22:41:46.060040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 
[2024-07-15 22:41:46.060050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 
22:41:46.060393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.060923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.060930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.060934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.060949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.060958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.060966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.060986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.061035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.061042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.061046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.061061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.061077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.061094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 
22:41:46.061143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.061150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.061153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.061169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.061186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.061202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.061247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.061254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.061258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.061279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.061295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.061312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.061354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.061361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 22:41:46.061365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.244 [2024-07-15 22:41:46.061381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.244 [2024-07-15 22:41:46.061390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.244 [2024-07-15 22:41:46.061397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.244 [2024-07-15 22:41:46.061414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.244 [2024-07-15 22:41:46.061459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.244 [2024-07-15 22:41:46.061466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.244 [2024-07-15 
22:41:46.061470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.061485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.061501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.061518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.061561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.061574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.061579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.061595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.061612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.061631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.061679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.061692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.061697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.061713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.061730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.061747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.061793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.061799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.061803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 
00:15:28.245 [2024-07-15 22:41:46.061818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.061835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.061852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.061913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.061922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.061926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.061941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.061950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.061958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.061977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.062079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:28.245 [2024-07-15 22:41:46.062172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.062197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.062328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.062451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062536] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.062553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.245 [2024-07-15 22:41:46.062663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.245 [2024-07-15 22:41:46.062704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.245 [2024-07-15 22:41:46.062716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.245 [2024-07-15 22:41:46.062720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.245 [2024-07-15 22:41:46.062736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.245 [2024-07-15 22:41:46.062745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.245 [2024-07-15 22:41:46.062753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.503 [2024-07-15 22:41:46.062771] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb99380, cid 3, qid 0 00:15:28.503 [2024-07-15 22:41:46.062812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.503 [2024-07-15 22:41:46.062824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.503 [2024-07-15 22:41:46.062828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.503 [2024-07-15 22:41:46.062833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.503 [2024-07-15 22:41:46.062844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:28.503 [2024-07-15 22:41:46.062849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:28.503 [2024-07-15 22:41:46.062853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb36510) 00:15:28.503 [2024-07-15 22:41:46.062861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.504 [2024-07-15 22:41:46.066927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xb99380, cid 3, qid 0 00:15:28.504 [2024-07-15 22:41:46.066981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:28.504 [2024-07-15 22:41:46.066989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:28.504 [2024-07-15 22:41:46.066993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:28.504 [2024-07-15 22:41:46.066997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb99380) on tqpair=0xb36510 00:15:28.504 [2024-07-15 22:41:46.067007] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:28.504 0% 00:15:28.504 Data Units Read: 0 00:15:28.504 Data Units Written: 0 00:15:28.504 Host Read Commands: 0 00:15:28.504 Host Write Commands: 0 00:15:28.504 Controller Busy Time: 0 minutes 00:15:28.504 Power Cycles: 0 00:15:28.504 Power On Hours: 0 hours 00:15:28.504 Unsafe Shutdowns: 0 00:15:28.504 Unrecoverable Media Errors: 0 00:15:28.504 Lifetime Error Log Entries: 0 00:15:28.504 Warning Temperature Time: 0 minutes 00:15:28.504 Critical Temperature Time: 0 minutes 00:15:28.504 00:15:28.504 Number of Queues 00:15:28.504 ================ 00:15:28.504 Number of I/O Submission Queues: 127 00:15:28.504 Number of I/O Completion Queues: 127 00:15:28.504 00:15:28.504 Active Namespaces 00:15:28.504 ================= 00:15:28.504 Namespace ID:1 00:15:28.504 Error Recovery Timeout: Unlimited 00:15:28.504 Command Set Identifier: NVM (00h) 00:15:28.504 Deallocate: Supported 00:15:28.504 Deallocated/Unwritten Error: Not Supported 00:15:28.504 Deallocated Read Value: Unknown 00:15:28.504 Deallocate in Write Zeroes: Not Supported 00:15:28.504 Deallocated Guard Field: 0xFFFF 00:15:28.504 Flush: Supported 00:15:28.504 Reservation: Supported 00:15:28.504 Namespace Sharing Capabilities: Multiple Controllers 00:15:28.504 Size (in LBAs): 131072 (0GiB) 00:15:28.504 Capacity (in LBAs): 131072 (0GiB) 00:15:28.504 Utilization (in LBAs): 131072 (0GiB) 00:15:28.504 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:28.504 EUI64: ABCDEF0123456789 00:15:28.504 UUID: d2d33570-26d2-464d-b60d-5fe7bca91a7b 00:15:28.504 Thin Provisioning: Not Supported 00:15:28.504 Per-NS Atomic Units: Yes 00:15:28.504 Atomic Boundary Size (Normal): 0 00:15:28.504 Atomic Boundary Size (PFail): 0 00:15:28.504 Atomic Boundary Offset: 0 00:15:28.504 Maximum Single Source Range Length: 65535 00:15:28.504 Maximum Copy Length: 65535 00:15:28.504 Maximum Source Range Count: 1 00:15:28.504 NGUID/EUI64 Never Reused: No 00:15:28.504 Namespace Write Protected: No 00:15:28.504 Number of LBA Formats: 1 00:15:28.504 Current LBA Format: LBA Format #00 00:15:28.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:28.504 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.504 22:41:46 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.504 rmmod nvme_tcp 00:15:28.504 rmmod nvme_fabrics 00:15:28.504 rmmod nvme_keyring 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74987 ']' 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74987 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74987 ']' 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74987 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74987 00:15:28.504 killing process with pid 74987 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74987' 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74987 00:15:28.504 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74987 00:15:28.761 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.761 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.761 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.762 ************************************ 00:15:28.762 END TEST nvmf_identify 00:15:28.762 ************************************ 00:15:28.762 00:15:28.762 real 0m2.618s 00:15:28.762 user 0m7.184s 00:15:28.762 sys 0m0.634s 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.762 22:41:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.762 22:41:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.762 22:41:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:15:28.762 22:41:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.762 22:41:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.762 22:41:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.762 ************************************ 00:15:28.762 START TEST nvmf_perf 00:15:28.762 ************************************ 00:15:28.762 22:41:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:29.020 * Looking for test storage... 00:15:29.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.020 22:41:46 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.020 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:29.021 Cannot find device "nvmf_tgt_br" 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.021 Cannot find device "nvmf_tgt_br2" 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:29.021 Cannot find device "nvmf_tgt_br" 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:29.021 Cannot find device "nvmf_tgt_br2" 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.021 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.279 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:29.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:29.280 00:15:29.280 --- 10.0.0.2 ping statistics --- 00:15:29.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.280 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:29.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:29.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:29.280 00:15:29.280 --- 10.0.0.3 ping statistics --- 00:15:29.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.280 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:29.280 00:15:29.280 --- 10.0.0.1 ping statistics --- 00:15:29.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.280 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.280 22:41:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75195 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75195 00:15:29.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75195 ']' 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.280 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.280 [2024-07-15 22:41:47.054738] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
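For reference, the nvmfappstart step traced just above amounts to launching nvmf_tgt inside the target network namespace and then waiting for its RPC socket before any rpc.py calls are issued. A minimal sketch, assuming the same namespace, core mask and repo path used in this run (the explicit rpc_get_methods poll here only stands in for what the waitforlisten helper does):

  # Start the NVMe-oF target in the nvmf_tgt_ns_spdk namespace:
  # shm id 0, all tracepoint groups enabled, core mask 0xF (cores 0-3).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the app answers on the default RPC socket (/var/tmp/spdk.sock)
  # before configuring it.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null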
00:15:29.280 [2024-07-15 22:41:47.054836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.538 [2024-07-15 22:41:47.192741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.538 [2024-07-15 22:41:47.295849] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.538 [2024-07-15 22:41:47.296106] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.538 [2024-07-15 22:41:47.296242] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.538 [2024-07-15 22:41:47.296298] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.538 [2024-07-15 22:41:47.296329] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.538 [2024-07-15 22:41:47.296504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.538 [2024-07-15 22:41:47.296557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.538 [2024-07-15 22:41:47.296662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.538 [2024-07-15 22:41:47.296672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.538 [2024-07-15 22:41:47.367215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:30.474 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.474 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:30.474 22:41:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.474 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.474 22:41:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:30.474 22:41:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.474 22:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:30.474 22:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:30.782 22:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:30.782 22:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:31.384 22:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:31.384 22:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.643 22:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:31.643 22:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:31.643 22:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:31.643 22:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:31.643 22:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:31.901 [2024-07-15 22:41:49.596786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
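The subsystem plumbing that perf.sh traces next (and that the NVMe/TCP listen notice below confirms) is a short rpc.py sequence; a condensed sketch using the same NQN, bdevs and 10.0.0.2:4420 listener that appear in this run, with $rpc standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

  # The TCP transport itself was created just above: $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Expose both the 64 MiB Malloc ramdisk and the local NVMe bdev as namespaces.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  # Listen on the veth address inside the target namespace, plus the discovery subsystem.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420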
00:15:31.901 22:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.466 22:41:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:32.466 22:41:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.723 22:41:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:32.723 22:41:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:32.981 22:41:50 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.239 [2024-07-15 22:41:50.902333] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.239 22:41:50 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.497 22:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:33.497 22:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:33.497 22:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:33.497 22:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:34.872 Initializing NVMe Controllers 00:15:34.872 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:34.872 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:34.872 Initialization complete. Launching workers. 00:15:34.872 ======================================================== 00:15:34.872 Latency(us) 00:15:34.872 Device Information : IOPS MiB/s Average min max 00:15:34.872 PCIE (0000:00:10.0) NSID 1 from core 0: 24669.02 96.36 1296.98 300.83 6143.18 00:15:34.872 ======================================================== 00:15:34.872 Total : 24669.02 96.36 1296.98 300.83 6143.18 00:15:34.872 00:15:34.872 22:41:52 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:35.811 Initializing NVMe Controllers 00:15:35.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:35.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:35.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:35.811 Initialization complete. Launching workers. 
00:15:35.811 ======================================================== 00:15:35.811 Latency(us) 00:15:35.811 Device Information : IOPS MiB/s Average min max 00:15:35.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2815.60 11.00 354.79 119.67 7191.75 00:15:35.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.63 0.48 8218.23 7916.05 12001.26 00:15:35.811 ======================================================== 00:15:35.811 Total : 2938.23 11.48 682.99 119.67 12001.26 00:15:35.811 00:15:36.068 22:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:37.441 Initializing NVMe Controllers 00:15:37.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:37.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:37.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:37.441 Initialization complete. Launching workers. 00:15:37.441 ======================================================== 00:15:37.441 Latency(us) 00:15:37.441 Device Information : IOPS MiB/s Average min max 00:15:37.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8619.19 33.67 3713.22 596.88 7822.90 00:15:37.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3965.52 15.49 8082.17 6679.83 16152.21 00:15:37.441 ======================================================== 00:15:37.441 Total : 12584.71 49.16 5089.91 596.88 16152.21 00:15:37.441 00:15:37.441 22:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:37.441 22:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:39.968 Initializing NVMe Controllers 00:15:39.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:39.968 Controller IO queue size 128, less than required. 00:15:39.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.968 Controller IO queue size 128, less than required. 00:15:39.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:39.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:39.968 Initialization complete. Launching workers. 
00:15:39.968 ======================================================== 00:15:39.968 Latency(us) 00:15:39.968 Device Information : IOPS MiB/s Average min max 00:15:39.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1611.89 402.97 80570.69 45661.59 124197.65 00:15:39.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.77 159.69 203182.75 85479.43 326514.71 00:15:39.968 ======================================================== 00:15:39.968 Total : 2250.65 562.66 115369.68 45661.59 326514.71 00:15:39.968 00:15:39.968 22:41:57 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:40.226 Initializing NVMe Controllers 00:15:40.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:40.226 Controller IO queue size 128, less than required. 00:15:40.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:40.226 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:40.226 Controller IO queue size 128, less than required. 00:15:40.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:40.226 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:40.226 WARNING: Some requested NVMe devices were skipped 00:15:40.226 No valid NVMe controllers or AIO or URING devices found 00:15:40.226 22:41:57 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:42.756 Initializing NVMe Controllers 00:15:42.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:42.756 Controller IO queue size 128, less than required. 00:15:42.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:42.756 Controller IO queue size 128, less than required. 00:15:42.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:42.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:42.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:42.756 Initialization complete. Launching workers. 
00:15:42.756 00:15:42.756 ==================== 00:15:42.756 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:42.756 TCP transport: 00:15:42.756 polls: 10449 00:15:42.756 idle_polls: 6714 00:15:42.756 sock_completions: 3735 00:15:42.756 nvme_completions: 6543 00:15:42.756 submitted_requests: 9810 00:15:42.756 queued_requests: 1 00:15:42.756 00:15:42.756 ==================== 00:15:42.756 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:42.756 TCP transport: 00:15:42.756 polls: 12995 00:15:42.756 idle_polls: 8788 00:15:42.756 sock_completions: 4207 00:15:42.756 nvme_completions: 6987 00:15:42.756 submitted_requests: 10478 00:15:42.756 queued_requests: 1 00:15:42.756 ======================================================== 00:15:42.756 Latency(us) 00:15:42.756 Device Information : IOPS MiB/s Average min max 00:15:42.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1630.67 407.67 80131.45 36799.06 128134.64 00:15:42.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1741.35 435.34 74408.96 25782.34 169968.11 00:15:42.756 ======================================================== 00:15:42.756 Total : 3372.02 843.01 77176.30 25782.34 169968.11 00:15:42.756 00:15:42.756 22:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:42.756 22:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.015 rmmod nvme_tcp 00:15:43.015 rmmod nvme_fabrics 00:15:43.015 rmmod nvme_keyring 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75195 ']' 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75195 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75195 ']' 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75195 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75195 00:15:43.015 killing process with pid 75195 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.015 22:42:00 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75195' 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75195 00:15:43.015 22:42:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75195 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.960 ************************************ 00:15:43.960 END TEST nvmf_perf 00:15:43.960 ************************************ 00:15:43.960 00:15:43.960 real 0m15.029s 00:15:43.960 user 0m56.274s 00:15:43.960 sys 0m4.143s 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.960 22:42:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:43.960 22:42:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:43.960 22:42:01 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:43.960 22:42:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.960 22:42:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.960 22:42:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.960 ************************************ 00:15:43.960 START TEST nvmf_fio_host 00:15:43.960 ************************************ 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:43.960 * Looking for test storage... 
00:15:43.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.960 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
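Condensed from the ip/iptables commands in the nvmf_veth_init trace that follows (and ignoring its initial cleanup of leftover interfaces), the test network the harness builds is roughly equivalent to the standalone sketch below. Interface names, the namespace, and the 10.0.0.x addresses are exactly as logged; this is a reconstruction from the xtrace output, not the script itself, and it must run as root.

    #!/usr/bin/env bash
    # Reconstruction of the nvmf_veth_init steps traced below: one initiator-side
    # veth, two target-side veths moved into a dedicated network namespace, all
    # tied together by a single bridge, with TCP/4420 allowed through the firewall.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries an IP address, the *_br end is enslaved to the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-facing ends live inside the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge joins the host-side ends so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks mirrored from the log: each address should answer one ping.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
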
00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.961 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:44.218 Cannot find device "nvmf_tgt_br" 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.218 Cannot find device "nvmf_tgt_br2" 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:44.218 Cannot find device "nvmf_tgt_br" 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:44.218 Cannot find device "nvmf_tgt_br2" 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.218 22:42:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.218 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:44.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:44.475 00:15:44.475 --- 10.0.0.2 ping statistics --- 00:15:44.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.475 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:44.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:44.475 00:15:44.475 --- 10.0.0.3 ping statistics --- 00:15:44.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.475 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:44.475 00:15:44.475 --- 10.0.0.1 ping statistics --- 00:15:44.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.475 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75606 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75606 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75606 ']' 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.475 22:42:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 [2024-07-15 22:42:02.207188] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:44.475 [2024-07-15 22:42:02.207473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.733 [2024-07-15 22:42:02.348353] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.733 [2024-07-15 22:42:02.470901] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:44.733 [2024-07-15 22:42:02.470961] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.733 [2024-07-15 22:42:02.470989] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.733 [2024-07-15 22:42:02.470999] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.733 [2024-07-15 22:42:02.471009] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.733 [2024-07-15 22:42:02.471805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.733 [2024-07-15 22:42:02.471961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.733 [2024-07-15 22:42:02.472208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.733 [2024-07-15 22:42:02.472053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.733 [2024-07-15 22:42:02.528705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:45.666 [2024-07-15 22:42:03.426475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.666 22:42:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.923 Malloc1 00:15:46.180 22:42:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.438 22:42:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.695 22:42:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.695 [2024-07-15 22:42:04.492829] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.695 22:42:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.954 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:47.212 22:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:47.212 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:47.212 fio-3.35 00:15:47.212 Starting 1 thread 00:15:49.739 00:15:49.739 test: (groupid=0, jobs=1): err= 0: pid=75689: Mon Jul 15 22:42:07 2024 00:15:49.739 read: IOPS=8588, BW=33.5MiB/s (35.2MB/s)(67.5MiB/2011msec) 00:15:49.739 slat (usec): min=2, max=250, avg= 2.76, stdev= 2.68 00:15:49.739 clat (usec): min=3102, max=21450, avg=7741.15, stdev=1071.69 00:15:49.739 lat (usec): min=3140, max=21453, avg=7743.90, stdev=1071.82 00:15:49.739 clat percentiles (usec): 00:15:49.739 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111], 00:15:49.739 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:15:49.739 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 9896], 00:15:49.739 | 99.00th=[12387], 99.50th=[13304], 99.90th=[16581], 99.95th=[19006], 00:15:49.739 | 99.99th=[21365] 00:15:49.739 bw ( KiB/s): min=31496, max=36000, per=100.00%, avg=34430.00, stdev=2039.59, samples=4 00:15:49.739 iops : min= 7874, max= 9000, avg=8607.50, stdev=509.90, samples=4 00:15:49.739 write: IOPS=8589, BW=33.6MiB/s (35.2MB/s)(67.5MiB/2011msec); 0 zone resets 00:15:49.739 slat 
(usec): min=2, max=1102, avg= 3.00, stdev= 9.70 00:15:49.739 clat (usec): min=2978, max=21305, avg=7085.36, stdev=1028.18 00:15:49.739 lat (usec): min=2987, max=21307, avg=7088.36, stdev=1028.38 00:15:49.739 clat percentiles (usec): 00:15:49.739 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 00:15:49.739 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6980], 00:15:49.739 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7898], 95.00th=[ 9110], 00:15:49.739 | 99.00th=[10945], 99.50th=[11731], 99.90th=[18482], 99.95th=[19006], 00:15:49.739 | 99.99th=[21365] 00:15:49.739 bw ( KiB/s): min=32272, max=35904, per=100.00%, avg=34406.00, stdev=1552.57, samples=4 00:15:49.739 iops : min= 8068, max= 8976, avg=8601.50, stdev=388.14, samples=4 00:15:49.739 lat (msec) : 4=0.06%, 10=96.80%, 20=3.09%, 50=0.05% 00:15:49.739 cpu : usr=67.41%, sys=23.73%, ctx=39, majf=0, minf=7 00:15:49.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:49.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.739 issued rwts: total=17272,17273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.739 00:15:49.739 Run status group 0 (all jobs): 00:15:49.739 READ: bw=33.5MiB/s (35.2MB/s), 33.5MiB/s-33.5MiB/s (35.2MB/s-35.2MB/s), io=67.5MiB (70.7MB), run=2011-2011msec 00:15:49.739 WRITE: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.5MiB (70.8MB), run=2011-2011msec 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
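Both fio jobs in this test go through SPDK's fio plugin rather than the kernel NVMe/TCP initiator: fio_plugin preloads build/fio/spdk_nvme so that the "spdk" ioengine named in the fio banner is available, and the target is addressed entirely through --filename. Stripped of the sanitizer-library probing visible in the trace, the first invocation above reduces to roughly the following (paths as on this CI VM):

    # fio driven by SPDK's userspace NVMe driver; the --filename string carries the
    # NVMe/TCP connection parameters instead of a block-device path.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The second run being set up in the trace around this point swaps in mock_sgl_config.fio against the same subsystem; its banner further down shows 16 KiB transfers instead of the 4 KiB used here.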
00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:49.739 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:49.740 22:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:49.740 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:49.740 fio-3.35 00:15:49.740 Starting 1 thread 00:15:52.288 00:15:52.288 test: (groupid=0, jobs=1): err= 0: pid=75733: Mon Jul 15 22:42:09 2024 00:15:52.288 read: IOPS=7963, BW=124MiB/s (130MB/s)(250MiB/2010msec) 00:15:52.288 slat (usec): min=3, max=121, avg= 4.06, stdev= 1.92 00:15:52.288 clat (usec): min=3358, max=19142, avg=8860.90, stdev=2632.00 00:15:52.288 lat (usec): min=3362, max=19145, avg=8864.96, stdev=2632.06 00:15:52.288 clat percentiles (usec): 00:15:52.288 | 1.00th=[ 4424], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6587], 00:15:52.288 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:15:52.288 | 70.00th=[10028], 80.00th=[11207], 90.00th=[12256], 95.00th=[13435], 00:15:52.288 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19006], 99.95th=[19006], 00:15:52.288 | 99.99th=[19006] 00:15:52.288 bw ( KiB/s): min=57824, max=71520, per=51.13%, avg=65152.00, stdev=5627.51, samples=4 00:15:52.288 iops : min= 3614, max= 4470, avg=4072.00, stdev=351.72, samples=4 00:15:52.288 write: IOPS=4741, BW=74.1MiB/s (77.7MB/s)(133MiB/1795msec); 0 zone resets 00:15:52.288 slat (usec): min=36, max=249, avg=39.20, stdev= 5.27 00:15:52.288 clat (usec): min=3252, max=20536, avg=12875.17, stdev=2076.69 00:15:52.288 lat (usec): min=3289, max=20574, avg=12914.37, stdev=2076.93 00:15:52.288 clat percentiles (usec): 00:15:52.288 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11207], 00:15:52.288 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:15:52.288 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15664], 95.00th=[16581], 00:15:52.288 | 99.00th=[18744], 99.50th=[19530], 99.90th=[19792], 99.95th=[20317], 00:15:52.288 | 99.99th=[20579] 00:15:52.288 bw ( KiB/s): min=59392, max=74784, per=89.52%, avg=67912.00, stdev=6357.32, samples=4 00:15:52.288 iops : min= 3712, max= 4674, avg=4244.50, stdev=397.33, samples=4 00:15:52.288 lat (msec) : 4=0.24%, 10=47.61%, 20=52.12%, 50=0.02% 00:15:52.288 cpu : usr=76.95%, sys=17.62%, ctx=51, majf=0, minf=20 00:15:52.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:52.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:52.288 issued rwts: total=16007,8511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:52.288 00:15:52.288 Run status group 0 (all jobs): 00:15:52.288 READ: 
bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=250MiB (262MB), run=2010-2010msec 00:15:52.288 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=133MiB (139MB), run=1795-1795msec 00:15:52.288 22:42:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.288 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.288 rmmod nvme_tcp 00:15:52.288 rmmod nvme_fabrics 00:15:52.288 rmmod nvme_keyring 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75606 ']' 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75606 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75606 ']' 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75606 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75606 00:15:52.546 killing process with pid 75606 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75606' 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75606 00:15:52.546 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75606 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:52.806 ************************************ 00:15:52.806 END TEST nvmf_fio_host 00:15:52.806 ************************************ 00:15:52.806 00:15:52.806 real 0m8.786s 00:15:52.806 user 0m35.578s 00:15:52.806 sys 0m2.547s 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.806 22:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.806 22:42:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:52.806 22:42:10 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:52.806 22:42:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:52.806 22:42:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.806 22:42:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:52.806 ************************************ 00:15:52.806 START TEST nvmf_failover 00:15:52.806 ************************************ 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:52.806 * Looking for test storage... 00:15:52.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.806 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:53.118 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:53.119 Cannot find device "nvmf_tgt_br" 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.119 Cannot find device "nvmf_tgt_br2" 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.119 Cannot find device "nvmf_tgt_br" 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.119 Cannot find device "nvmf_tgt_br2" 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 
00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:53.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:15:53.119 00:15:53.119 --- 10.0.0.2 ping statistics --- 00:15:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.119 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:53.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:53.119 00:15:53.119 --- 10.0.0.3 ping statistics --- 00:15:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.119 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:15:53.119 00:15:53.119 --- 10.0.0.1 ping statistics --- 00:15:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.119 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.119 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.378 22:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:53.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
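With the failover test's veth topology up and nvme-tcp loaded, nvmfappstart launches the target. As in the fio host test earlier in this log, the binary runs inside the nvmf_tgt_ns_spdk namespace so its listeners bind to the 10.0.0.x veth addresses; only the core mask differs (0xE here versus 0xF before). A minimal stand-in for that launch-and-wait step, reconstructed from the trace (the harness's waitforlisten helper is more careful than this loop), looks like:

    # Start the NVMe-oF target inside the test namespace: shared-memory id 0,
    # all tracepoint groups enabled (0xFFFF), cores 1-3 (mask 0xE).
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the default RPC socket until the app answers, bailing out if it died.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1
        sleep 0.5
    done
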
00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75947 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75947 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75947 ']' 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.379 22:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:53.379 [2024-07-15 22:42:11.024593] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:53.379 [2024-07-15 22:42:11.024704] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.379 [2024-07-15 22:42:11.160489] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.637 [2024-07-15 22:42:11.278544] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.637 [2024-07-15 22:42:11.278818] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.637 [2024-07-15 22:42:11.278977] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.637 [2024-07-15 22:42:11.279116] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.637 [2024-07-15 22:42:11.279153] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:53.637 [2024-07-15 22:42:11.279378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.637 [2024-07-15 22:42:11.279459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.637 [2024-07-15 22:42:11.279464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.637 [2024-07-15 22:42:11.334629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.572 [2024-07-15 22:42:12.306724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.572 22:42:12 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:54.831 Malloc0 00:15:55.090 22:42:12 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.348 22:42:12 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.613 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.613 [2024-07-15 22:42:13.407706] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.613 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:55.873 [2024-07-15 22:42:13.695866] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:56.131 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:56.131 [2024-07-15 22:42:13.948233] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:56.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
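The rpc.py calls just traced are the heart of the failover setup: a single malloc-backed subsystem exported on three TCP listeners (4420, 4421 and 4422) at the same address, so the initiator has spare paths once one listener is torn down. Collected into one place (socket, paths and arguments as in the log), the target-side configuration amounts to:

    # Transport options (-t tcp -o -u 8192) are taken verbatim from the trace; then a
    # 64 MiB / 512 B-block malloc bdev and one subsystem announced on three ports.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done

The bdevperf process started next attaches this subsystem as bdev NVMe0 over its own RPC socket (first on port 4420, then again on 4421 in the trace below), after which the test removes the 4420 listener so that I/O has to continue over the remaining path.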
00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=76006 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 76006 /var/tmp/bdevperf.sock 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76006 ']' 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.389 22:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:57.323 22:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.323 22:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:57.323 22:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:57.890 NVMe0n1 00:15:57.890 22:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:58.149 00:15:58.149 22:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=76025 00:15:58.149 22:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.149 22:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:59.084 22:42:16 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.343 [2024-07-15 22:42:17.055684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to 
be set 00:15:59.343 [2024-07-15 22:42:17.055783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055924] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055983] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.055991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056067] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056230] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.343 [2024-07-15 22:42:17.056238] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056257] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056316] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056344] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 
00:15:59.344 [2024-07-15 22:42:17.056449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056466] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is same with the state(5) to be set 00:15:59.344 [2024-07-15 22:42:17.056651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7fb70 is 
same with the state(5) to be set 00:15:59.344 22:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:02.632 22:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.632 00:16:02.891 22:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:03.149 22:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:06.442 22:42:23 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.442 [2024-07-15 22:42:24.071841] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.442 22:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:07.376 22:42:25 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:07.634 22:42:25 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 76025 00:16:14.196 0 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 76006 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76006 ']' 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76006 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76006 00:16:14.196 killing process with pid 76006 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76006' 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76006 00:16:14.196 22:42:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76006 00:16:14.196 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:14.196 [2024-07-15 22:42:14.050123] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:14.196 [2024-07-15 22:42:14.050386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:16:14.196 [2024-07-15 22:42:14.201652] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.196 [2024-07-15 22:42:14.363773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.196 [2024-07-15 22:42:14.439385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:14.196 Running I/O for 15 seconds... 
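For reference, the shell trace above condenses to the following failover sequence: bdevperf is started idle on its own RPC socket, given two paths to the same subsystem, and while its 15-second verify workload runs the target's listeners are removed and re-added so the initiator must fail over 4420 -> 4421 -> 4422 -> 4420. The sketch below is a simplification of test/nvmf/host/failover.sh using only the paths and flags visible in the trace; the $SPDK/$SOCK/$NQN variables and the plain &/wait/kill process handling are editorial shorthand, not the script's own helpers.

SPDK=/home/vagrant/spdk_repo/spdk          # repo location as used in the trace
SOCK=/var/tmp/bdevperf.sock                # bdevperf's private RPC socket
NQN=nqn.2016-06.io.spdk:cnode1

# 1. Start bdevperf in wait-for-RPC mode (-z): 128 queue depth, 4 KiB verify I/O, 15 s.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!

# 2. Attach the primary path (4420) and an alternate path (4421) for the same subsystem.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"

# 3. Start the workload, then pull listeners out from under it on the target side,
#    forcing failovers while I/O is in flight.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
run_test_pid=$!
sleep 1
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 3
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
sleep 3
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# 4. Let the verify workload finish, then stop bdevperf.
wait "$run_test_pid"
kill "$bdevperf_pid"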
00:16:14.196 [2024-07-15 22:42:17.056905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.056995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.057975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.196 [2024-07-15 22:42:17.057991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.196 [2024-07-15 22:42:17.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64280 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:14.197 [2024-07-15 22:42:17.058680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.058954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.058969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.197 [2024-07-15 22:42:17.059573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.197 [2024-07-15 22:42:17.059587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.059976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.059989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.060017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.198 [2024-07-15 22:42:17.060046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 
[2024-07-15 22:42:17.060255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.198 [2024-07-15 22:42:17.060616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.198 [2024-07-15 22:42:17.060629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:17.060831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:17.060860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:17.060902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:17.060930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:17.060967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.060982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:17.060995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.061010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:17.061024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.061043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111bd80 is same with the state(5) to be set 00:16:14.199 [2024-07-15 22:42:17.061060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:14.199 [2024-07-15 22:42:17.061070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:14.199 [2024-07-15 22:42:17.061086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:16:14.199 [2024-07-15 22:42:17.061099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:17.061157] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x111bd80 was disconnected and freed. reset controller. 
00:16:14.199 [2024-07-15 22:42:17.061192] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:16:14.199 [2024-07-15 22:42:17.061262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.199 [2024-07-15 22:42:17.061283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.199 [2024-07-15 22:42:17.061298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.199 [2024-07-15 22:42:17.061312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.199 [2024-07-15 22:42:17.061327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.199 [2024-07-15 22:42:17.061340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.199 [2024-07-15 22:42:17.061354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.199 [2024-07-15 22:42:17.061367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.199 [2024-07-15 22:42:17.061381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:14.199 [2024-07-15 22:42:17.061426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb710 (9): Bad file descriptor 
00:16:14.199 [2024-07-15 22:42:17.065263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:16:14.199 [2024-07-15 22:42:17.104993] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
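The records above capture the first path failover in this run: I/O queued on the 10.0.0.2:4420 queue pair is completed with ABORTED - SQ DELETION, the admin queue's async event requests are aborted, the controller is marked failed, and bdev_nvme reconnects on 10.0.0.2:4421. As a minimal sketch (an assumption about the test setup, not something printed in this log), attaching the same subsystem over two TCP paths with SPDK's rpc.py is what gives bdev_nvme_failover_trid an alternate path to switch to:

  # Assumed setup sketch: register two TCP paths to nqn.2016-06.io.spdk:cnode1
  # so the initiator has 10.0.0.2:4421 available when :4420 is torn down.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover

The bdev name Nvme0 and the -x failover multipath policy are illustrative; the addresses, service IDs, and subsystem NQN are the ones printed in the log above.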
00:16:14.199 [2024-07-15 22:42:20.756946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.199 [2024-07-15 22:42:20.757530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.199 [2024-07-15 22:42:20.757640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.199 [2024-07-15 22:42:20.757654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.757683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.757713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.757743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.757775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.757805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.757836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.757891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.757925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.757956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.757971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.757985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77632 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.758044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.200 [2024-07-15 22:42:20.758308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:14.200 [2024-07-15 22:42:20.758338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.758367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.200 [2024-07-15 22:42:20.758383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.200 [2024-07-15 22:42:20.758397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.201 [2024-07-15 22:42:20.758791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.758821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.758851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.758893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.758922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.758952] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.758983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.758998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.201 [2024-07-15 22:42:20.759319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.201 [2024-07-15 22:42:20.759335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.202 [2024-07-15 22:42:20.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.759784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759911] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.759971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.759985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.760015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.202 [2024-07-15 22:42:20.760049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.202 [2024-07-15 22:42:20.760251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.202 [2024-07-15 22:42:20.760265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77960 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.203 [2024-07-15 22:42:20.760776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.203 [2024-07-15 22:42:20.760806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:14.203 [2024-07-15 22:42:20.760836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.203 [2024-07-15 22:42:20.760875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.203 [2024-07-15 22:42:20.760907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.203 [2024-07-15 22:42:20.760937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.203 [2024-07-15 22:42:20.760968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.760984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.203 [2024-07-15 22:42:20.760997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.761047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:14.203 [2024-07-15 22:42:20.761075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:14.203 [2024-07-15 22:42:20.761088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:16:14.203 [2024-07-15 22:42:20.761102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.203 [2024-07-15 22:42:20.761162] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x111dd40 was disconnected and freed. reset controller. 
00:16:14.203 [2024-07-15 22:42:20.761180] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:16:14.203 [2024-07-15 22:42:20.761235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.203 [2024-07-15 22:42:20.761257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.203 [2024-07-15 22:42:20.761273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.203 [2024-07-15 22:42:20.761287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.203 [2024-07-15 22:42:20.761302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.203 [2024-07-15 22:42:20.761316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.203 [2024-07-15 22:42:20.761330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:14.204 [2024-07-15 22:42:20.761344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:14.204 [2024-07-15 22:42:20.761358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:14.204 [2024-07-15 22:42:20.761393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb710 (9): Bad file descriptor 
00:16:14.204 [2024-07-15 22:42:20.765207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:16:14.204 [2024-07-15 22:42:20.806127] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
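This second sequence repeats the pattern one hop further along the chain: qpair 0x111dd40 on 10.0.0.2:4421 is torn down and bdev_nvme fails over to 10.0.0.2:4422, again ending with a successful controller reset. For the 4420 -> 4421 -> 4422 chain to exist at all, the target side has to listen on all three service IDs; a plausible sketch (assumed, not shown in this log) using SPDK's rpc.py:

  # Assumed target-side sketch: expose the subsystem on the three TCP ports
  # the initiator is seen failing over across (4420, 4421, 4422).
  for port in 4420 4421 4422; do
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -f ipv4 -s "$port"
  done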
00:16:14.204 [2024-07-15 22:42:25.380625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.380977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.380993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.204 [2024-07-15 22:42:25.381013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381401] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.204 [2024-07-15 22:42:25.381608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.204 [2024-07-15 22:42:25.381623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28752 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.381974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.381990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:14.205 [2024-07-15 22:42:25.382034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 
22:42:25.382459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.205 [2024-07-15 22:42:25.382795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.205 [2024-07-15 22:42:25.382933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.205 [2024-07-15 22:42:25.382949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.382963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.382993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 
22:42:25.383704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.206 [2024-07-15 22:42:25.383857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.383975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.383989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.384008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.384022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.384037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.206 [2024-07-15 22:42:25.384053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.206 [2024-07-15 22:42:25.384069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.207 [2024-07-15 22:42:25.384355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29128 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.207 [2024-07-15 22:42:25.384805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.384879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:14.207 [2024-07-15 22:42:25.384898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:14.207 [2024-07-15 22:42:25.384910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29176 len:8 PRP1 0x0 PRP2 0x0 00:16:14.207 [2024-07-15 22:42:25.384924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.385006] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x113a330 was disconnected and freed. reset controller. 
00:16:14.207 [2024-07-15 22:42:25.385026] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:14.207 [2024-07-15 22:42:25.385100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.207 [2024-07-15 22:42:25.385122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.385143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.207 [2024-07-15 22:42:25.385171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.385187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.207 [2024-07-15 22:42:25.385200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.385215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.207 [2024-07-15 22:42:25.385237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.207 [2024-07-15 22:42:25.385252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:14.207 [2024-07-15 22:42:25.389206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:14.207 [2024-07-15 22:42:25.389260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb710 (9): Bad file descriptor 00:16:14.207 [2024-07-15 22:42:25.421150] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
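The burst of NOTICE lines above is the expected fallout of the forced path drop: every command still queued for the 10.0.0.2:4422 path is completed manually as ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme fails over to 10.0.0.2:4420 and resets the controller. The script then asserts that one such reset completed per failover leg (the grep -c / count check in the trace just below); a minimal sketch of that assertion, assuming the bdevperf output was captured to the try.txt file referenced elsewhere in this trace, is:

    # Count completed resets in the captured bdevperf log and require one per
    # forced failover (three in this run), mirroring host/failover.sh@65-67.
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi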
00:16:14.207 
00:16:14.207 Latency(us) 
00:16:14.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:14.207 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:16:14.207 Verification LBA range: start 0x0 length 0x4000 
00:16:14.207 NVMe0n1 : 15.01 8853.81 34.59 232.20 0.00 14054.84 644.19 16681.89 
00:16:14.207 =================================================================================================================== 
00:16:14.207 Total : 8853.81 34.59 232.20 0.00 14054.84 644.19 16681.89 
00:16:14.207 Received shutdown signal, test time was about 15.000000 seconds 
00:16:14.207 
00:16:14.207 Latency(us) 
00:16:14.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:14.207 =================================================================================================================== 
00:16:14.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76199 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76199 /var/tmp/bdevperf.sock 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76199 ']' 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:14.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
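With a fresh bdevperf process waiting on /var/tmp/bdevperf.sock, the trace below re-creates the multipath setup for the next pass: two extra listeners are added on the target and the same subsystem is attached through every portal, so bdev_nvme has alternate paths to fail over to when a listener is later torn down. A rough shell equivalent of those steps, using only the RPC calls and arguments that appear in the trace (the actual script issues each attach as a separate numbered step rather than a loop), is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Expose two additional TCP portals on the target (host/failover.sh@76-77).
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach the subsystem through each portal inside bdevperf (@78-80); as the
    # trace below shows, only the first attach reports an NVMe0n1 namespace, the
    # later ones register the extra paths under the same NVMe0 controller name.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done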
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 
00:16:14.207 22:42:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:16:14.465 22:42:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:16:14.465 22:42:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 
00:16:14.465 22:42:32 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
00:16:14.731 [2024-07-15 22:42:32.521501] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 
00:16:14.731 22:42:32 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
00:16:14.988 [2024-07-15 22:42:32.769698] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
00:16:14.988 22:42:32 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:16:15.556 NVMe0n1 
00:16:15.556 22:42:33 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:16:15.813 
00:16:15.813 22:42:33 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:16:16.071 
00:16:16.071 22:42:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:16:16.329 22:42:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 
00:16:16.329 22:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:16:16.894 22:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 
00:16:20.191 22:42:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:16:20.191 22:42:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 
00:16:20.191 22:42:37 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76287 
00:16:20.191 22:42:37 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:16:20.191 22:42:37 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76287 
00:16:21.126 0 
00:16:21.126 22:42:38 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
00:16:21.126 [2024-07-15 22:42:31.248178] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:16:21.126 [2024-07-15 22:42:31.248294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76199 ] 00:16:21.126 [2024-07-15 22:42:31.383744] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.126 [2024-07-15 22:42:31.514468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.126 [2024-07-15 22:42:31.571654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:21.126 [2024-07-15 22:42:34.399190] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:21.126 [2024-07-15 22:42:34.399345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.126 [2024-07-15 22:42:34.399370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.126 [2024-07-15 22:42:34.399389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.126 [2024-07-15 22:42:34.399402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.126 [2024-07-15 22:42:34.399417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.126 [2024-07-15 22:42:34.399442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.126 [2024-07-15 22:42:34.399457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.126 [2024-07-15 22:42:34.399470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.126 [2024-07-15 22:42:34.399483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:21.126 [2024-07-15 22:42:34.399543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:21.126 [2024-07-15 22:42:34.399588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe7710 (9): Bad file descriptor 00:16:21.126 [2024-07-15 22:42:34.405152] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:21.126 Running I/O for 1 seconds... 
00:16:21.126 00:16:21.126 Latency(us) 00:16:21.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.126 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:21.126 Verification LBA range: start 0x0 length 0x4000 00:16:21.126 NVMe0n1 : 1.01 7734.61 30.21 0.00 0.00 16438.90 1325.61 17515.99 00:16:21.126 =================================================================================================================== 00:16:21.126 Total : 7734.61 30.21 0.00 0.00 16438.90 1325.61 17515.99 00:16:21.126 22:42:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:21.127 22:42:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:21.765 22:42:39 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:21.765 22:42:39 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:21.765 22:42:39 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:22.025 22:42:39 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:22.283 22:42:40 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76199 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76199 ']' 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76199 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:25.569 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76199 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.843 killing process with pid 76199 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76199' 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76199 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76199 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:25.843 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:26.407 22:42:43 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.407 22:42:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.407 rmmod nvme_tcp 00:16:26.407 rmmod nvme_fabrics 00:16:26.407 rmmod nvme_keyring 00:16:26.407 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.407 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:26.407 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:26.407 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75947 ']' 00:16:26.407 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75947 00:16:26.407 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75947 ']' 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75947 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75947 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:26.408 killing process with pid 75947 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75947' 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75947 00:16:26.408 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75947 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:26.666 00:16:26.666 real 0m33.849s 00:16:26.666 user 2m11.578s 00:16:26.666 sys 0m6.051s 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.666 22:42:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:26.666 ************************************ 00:16:26.666 END TEST nvmf_failover 00:16:26.666 ************************************ 00:16:26.666 22:42:44 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:26.666 22:42:44 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:26.666 22:42:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.666 22:42:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.666 22:42:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.666 ************************************ 00:16:26.666 START TEST nvmf_host_discovery 00:16:26.666 ************************************ 00:16:26.666 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:26.666 * Looking for test storage... 00:16:26.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:26.925 Cannot find device "nvmf_tgt_br" 00:16:26.925 
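[editor's note] The trace above is nvmftestinit tearing down interfaces left over from the previous test, and the commands that follow (nvmf_veth_init in nvmf/common.sh) rebuild the veth/namespace test network. A minimal sketch of that bring-up, assembled only from the commands visible in this trace (assumes root; error handling and the "ignore failures on cleanup" steps omitted):

    # create the target network namespace
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends carry traffic, *_br ends get bridged on the host side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses used by the tests: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up, inside and outside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic and bridge forwarding, then verify connectivity
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

The "Cannot find device"/"Cannot open network namespace" messages above are the expected cleanup failures on a fresh node; the trace below repeats the sequence sketched here before starting nvmf_tgt inside the namespace.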
22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.925 Cannot find device "nvmf_tgt_br2" 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:26.925 Cannot find device "nvmf_tgt_br" 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:26.925 Cannot find device "nvmf_tgt_br2" 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:26.925 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:27.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:27.184 00:16:27.184 --- 10.0.0.2 ping statistics --- 00:16:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.184 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:27.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:27.184 00:16:27.184 --- 10.0.0.3 ping statistics --- 00:16:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.184 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:27.184 00:16:27.184 --- 10.0.0.1 ping statistics --- 00:16:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.184 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76558 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76558 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76558 ']' 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.184 22:42:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.184 [2024-07-15 22:42:44.931232] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:27.184 [2024-07-15 22:42:44.931335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.442 [2024-07-15 22:42:45.070578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.442 [2024-07-15 22:42:45.194459] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:27.442 [2024-07-15 22:42:45.194548] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.442 [2024-07-15 22:42:45.194563] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.442 [2024-07-15 22:42:45.194574] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.442 [2024-07-15 22:42:45.194583] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.442 [2024-07-15 22:42:45.194613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.442 [2024-07-15 22:42:45.253178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:28.376 22:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.376 22:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:28.376 22:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.376 22:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.376 22:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 [2024-07-15 22:42:46.011666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 [2024-07-15 22:42:46.019734] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 null0 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 null1 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76591 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76591 /tmp/host.sock 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76591 ']' 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.376 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.376 22:42:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 [2024-07-15 22:42:46.131113] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:28.376 [2024-07-15 22:42:46.131271] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76591 ] 00:16:28.634 [2024-07-15 22:42:46.278999] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.634 [2024-07-15 22:42:46.406076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.634 [2024-07-15 22:42:46.464537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.628 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 [2024-07-15 22:42:47.500222] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:29.888 
22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.888 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.147 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:30.147 22:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:30.406 [2024-07-15 22:42:48.154762] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:30.406 [2024-07-15 22:42:48.154798] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:30.406 [2024-07-15 22:42:48.154819] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:30.406 [2024-07-15 22:42:48.160829] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:30.406 [2024-07-15 22:42:48.218441] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:30.406 [2024-07-15 22:42:48.218481] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.975 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.234 22:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:31.234 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:31.235 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.493 [2024-07-15 22:42:49.122024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:31.493 [2024-07-15 22:42:49.122766] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:31.493 [2024-07-15 22:42:49.122806] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:31.493 [2024-07-15 22:42:49.128751] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:31.493 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:31.494 [2024-07-15 22:42:49.188220] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:31.494 [2024-07-15 22:42:49.188252] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:31.494 [2024-07-15 22:42:49.188260] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:31.494 22:42:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:31.494 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.752 [2024-07-15 22:42:49.367115] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:31.752 [2024-07-15 22:42:49.367155] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.752 [2024-07-15 22:42:49.367251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.752 [2024-07-15 22:42:49.367286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.752 [2024-07-15 22:42:49.367302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.752 [2024-07-15 22:42:49.367312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.752 [2024-07-15 22:42:49.367322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.752 [2024-07-15 22:42:49.367332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.752 [2024-07-15 22:42:49.367343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.752 [2024-07-15 22:42:49.367352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.752 [2024-07-15 22:42:49.367362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269fa0 is same with the state(5) to be set 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.752 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:31.752 [2024-07-15 22:42:49.373101] bdev_nvme.c:6775:discovery_remove_controllers: 
*INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:31.753 [2024-07-15 22:42:49.373135] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:31.753 [2024-07-15 22:42:49.373212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269fa0 (9): Bad file descriptor 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:31.753 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:32.011 22:42:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:32.011 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.012 22:42:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 [2024-07-15 22:42:50.805592] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:33.388 [2024-07-15 22:42:50.805639] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:33.388 [2024-07-15 22:42:50.805661] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:33.388 [2024-07-15 22:42:50.811663] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:33.388 [2024-07-15 22:42:50.872732] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:33.388 [2024-07-15 22:42:50.872818] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 request: 00:16:33.388 { 00:16:33.388 "name": "nvme", 00:16:33.388 "trtype": 
"tcp", 00:16:33.388 "traddr": "10.0.0.2", 00:16:33.388 "adrfam": "ipv4", 00:16:33.388 "trsvcid": "8009", 00:16:33.388 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:33.388 "wait_for_attach": true, 00:16:33.388 "method": "bdev_nvme_start_discovery", 00:16:33.388 "req_id": 1 00:16:33.388 } 00:16:33.388 Got JSON-RPC error response 00:16:33.388 response: 00:16:33.388 { 00:16:33.388 "code": -17, 00:16:33.388 "message": "File exists" 00:16:33.388 } 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:33.388 22:42:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 request: 00:16:33.388 { 00:16:33.388 "name": "nvme_second", 00:16:33.388 "trtype": "tcp", 00:16:33.388 "traddr": "10.0.0.2", 00:16:33.388 "adrfam": "ipv4", 00:16:33.388 "trsvcid": "8009", 00:16:33.388 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:33.388 "wait_for_attach": true, 00:16:33.388 "method": "bdev_nvme_start_discovery", 00:16:33.388 "req_id": 1 00:16:33.388 } 00:16:33.388 Got JSON-RPC error response 00:16:33.388 response: 00:16:33.388 { 00:16:33.388 "code": -17, 00:16:33.388 "message": "File exists" 00:16:33.388 } 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.388 22:42:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:33.389 22:42:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.389 22:42:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.766 [2024-07-15 22:42:52.169308] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:34.766 [2024-07-15 22:42:52.169399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e01a0 with addr=10.0.0.2, port=8010 00:16:34.766 [2024-07-15 22:42:52.169440] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:34.766 [2024-07-15 22:42:52.169451] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:34.766 [2024-07-15 22:42:52.169460] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:35.362 [2024-07-15 22:42:53.169237] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:35.362 [2024-07-15 22:42:53.169323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e01a0 with addr=10.0.0.2, port=8010 00:16:35.362 [2024-07-15 22:42:53.169348] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:35.362 [2024-07-15 22:42:53.169358] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:35.362 [2024-07-15 22:42:53.169367] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:36.739 [2024-07-15 22:42:54.169081] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:36.739 request: 00:16:36.739 { 00:16:36.739 "name": "nvme_second", 00:16:36.739 "trtype": "tcp", 00:16:36.739 "traddr": "10.0.0.2", 00:16:36.739 "adrfam": "ipv4", 00:16:36.739 "trsvcid": "8010", 00:16:36.739 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:36.739 "wait_for_attach": false, 00:16:36.739 "attach_timeout_ms": 3000, 00:16:36.739 "method": "bdev_nvme_start_discovery", 00:16:36.739 "req_id": 1 00:16:36.739 } 00:16:36.739 Got JSON-RPC error response 00:16:36.739 response: 00:16:36.739 { 00:16:36.739 "code": -110, 00:16:36.739 "message": "Connection timed out" 00:16:36.739 } 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76591 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:36.739 rmmod nvme_tcp 00:16:36.739 rmmod nvme_fabrics 00:16:36.739 rmmod nvme_keyring 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76558 ']' 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76558 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76558 ']' 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76558 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76558 00:16:36.739 killing process with pid 76558 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76558' 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76558 00:16:36.739 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76558 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:36.999 00:16:36.999 real 0m10.231s 00:16:36.999 user 0m19.826s 00:16:36.999 sys 0m2.034s 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.999 ************************************ 00:16:36.999 END TEST nvmf_host_discovery 00:16:36.999 ************************************ 00:16:36.999 22:42:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:36.999 22:42:54 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:36.999 22:42:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.999 22:42:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.999 22:42:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.999 ************************************ 00:16:36.999 START TEST nvmf_host_multipath_status 00:16:36.999 ************************************ 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:36.999 * Looking for test storage... 
00:16:36.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:16:36.999 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:37.000 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:37.258 Cannot find device "nvmf_tgt_br" 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:37.258 Cannot find device "nvmf_tgt_br2" 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:37.258 Cannot find device "nvmf_tgt_br" 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:37.258 Cannot find device "nvmf_tgt_br2" 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:37.258 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:37.259 22:42:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.259 22:42:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:37.259 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:37.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:37.517 00:16:37.517 --- 10.0.0.2 ping statistics --- 00:16:37.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.517 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:37.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:37.517 00:16:37.517 --- 10.0.0.3 ping statistics --- 00:16:37.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.517 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:16:37.517 00:16:37.517 --- 10.0.0.1 ping statistics --- 00:16:37.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.517 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:37.517 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=77046 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 77046 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77046 ']' 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.518 22:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:37.518 [2024-07-15 22:42:55.254810] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:16:37.518 [2024-07-15 22:42:55.254939] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.776 [2024-07-15 22:42:55.394846] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:37.776 [2024-07-15 22:42:55.510851] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.776 [2024-07-15 22:42:55.510948] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.776 [2024-07-15 22:42:55.510977] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.776 [2024-07-15 22:42:55.510985] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.776 [2024-07-15 22:42:55.510993] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.776 [2024-07-15 22:42:55.511084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.776 [2024-07-15 22:42:55.511329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.776 [2024-07-15 22:42:55.567649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77046 00:16:38.711 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:38.969 [2024-07-15 22:42:56.563495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.969 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:39.228 Malloc0 00:16:39.228 22:42:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:39.486 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.744 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.744 [2024-07-15 22:42:57.559054] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.744 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:40.003 [2024-07-15 22:42:57.835258] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77107 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77107 /var/tmp/bdevperf.sock 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77107 ']' 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.261 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.262 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.262 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.262 22:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:41.199 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.199 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:41.199 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:41.765 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:42.023 Nvme0n1 00:16:42.023 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:42.281 Nvme0n1 00:16:42.281 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:42.281 22:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:44.182 22:43:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:44.182 22:43:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:44.748 22:43:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:44.748 22:43:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.120 22:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.378 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.378 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:46.378 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.378 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:46.637 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.637 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:46.637 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.637 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:46.893 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.893 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:46.893 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.893 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.150 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.150 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:47.150 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.150 22:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:47.714 22:43:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.714 22:43:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:47.714 22:43:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:47.714 22:43:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:48.318 22:43:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:49.252 22:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:49.252 22:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:49.252 22:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.252 22:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:49.252 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:49.252 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:49.252 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.510 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:49.769 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.769 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:49.769 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.769 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:50.028 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.028 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:50.028 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.028 22:43:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:50.288 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.288 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:50.288 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.288 22:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:50.548 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.548 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:50.548 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:50.548 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.807 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.807 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:50.807 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:51.067 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:51.327 22:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:52.328 22:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:52.328 22:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:52.328 22:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.328 22:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:52.586 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.586 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:52.586 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.586 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:52.845 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:52.845 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:52.845 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:52.845 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.104 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.104 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:53.104 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:53.104 22:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.362 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.362 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:53.362 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.362 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:53.621 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.621 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:53.621 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.621 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:54.190 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.190 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:54.190 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:54.190 22:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:54.448 22:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:55.383 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:55.383 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:55.383 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.383 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:55.960 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.960 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:55.960 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.961 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:55.961 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:55.961 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:55.961 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.961 22:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:56.219 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.219 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:56.219 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.219 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:56.784 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.784 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:56.784 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.784 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:57.043 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.043 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:57.043 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.043 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:57.302 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.302 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:57.302 22:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:57.561 22:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:57.821 22:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:58.782 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:58.782 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:58.782 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.782 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:59.041 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.041 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:59.041 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.041 22:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:59.299 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.299 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:59.299 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.299 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:59.558 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.558 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:59.558 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.558 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:00.125 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.125 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:00.125 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.125 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:00.384 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.384 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:00.384 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.384 22:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:00.642 22:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.642 22:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:00.642 22:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:00.901 22:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:01.159 22:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:02.094 22:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:02.094 22:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:02.094 22:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.094 22:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:02.353 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:02.353 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:02.353 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.353 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:02.611 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.611 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:02.611 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.611 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.178 22:43:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.178 22:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:03.436 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:03.436 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:03.436 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.436 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:03.695 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.695 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:03.954 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:03.954 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:04.213 22:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:04.473 22:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:05.848 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.106 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.106 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:06.106 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.106 22:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:06.364 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.364 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:06.364 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:06.364 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.622 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.622 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:06.622 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.622 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:06.880 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.880 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:06.880 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.880 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:07.139 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.139 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:07.139 22:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:07.704 22:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:07.704 22:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:08.705 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:08.705 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:08.705 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.705 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:08.964 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:08.964 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:08.964 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.964 22:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:09.222 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.222 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:09.480 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:09.480 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.480 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.480 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:09.480 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.480 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:09.738 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.738 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:09.738 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.738 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:09.996 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.996 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:09.996 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.996 22:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:10.254 22:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:10.254 22:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:10.254 22:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:10.511 22:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:10.769 22:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:12.144 22:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.402 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.402 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:12.402 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.402 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:12.660 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.660 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:12.660 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.660 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:17:12.919 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.919 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:12.919 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:12.919 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.178 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.178 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:13.178 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.178 22:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:13.437 22:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.437 22:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:13.437 22:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:14.002 22:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:14.002 22:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:15.375 22:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:15.375 22:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:15.375 22:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:15.375 22:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.375 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.375 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:15.375 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:15.375 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.632 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:15.632 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:17:15.632 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.632 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:15.888 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.888 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:15.888 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.888 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:16.145 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.145 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:16.145 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:16.145 22:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77107 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77107 ']' 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77107 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.715 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77107 00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:16.981 killing process with pid 77107 00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77107' 00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77107 
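
The trace above repeats one procedure for every ANA combination it exercises: flip the ANA state of each listener over the target's default RPC socket, wait a second for the initiator to pick the change up, then read bdev_nvme_get_io_paths through the bdevperf RPC socket and filter the path for each port with jq. The sketch below restates that procedure as a small standalone script. It reuses the rpc.py path, sockets, ports, and jq filters that appear verbatim in the log; the wrapper functions and the final assertion are illustrative reconstructions, not the exact contents of host/multipath_status.sh.

#!/usr/bin/env bash
# Sketch of the check loop seen in the trace above. Assumes the same SPDK repo
# location and RPC sockets the log uses; wrapper names are illustrative.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Set the ANA state of the subsystem's listeners on ports 4420 and 4421
# (target-side RPC, default rpc.sock).
set_ANA_state() {
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Read one attribute (current / connected / accessible) of the I/O path on a
# given port through the bdevperf RPC socket and compare it with the expected value.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($RPC -s $BPERF_SOCK bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# Example: make 4420 non-optimized and 4421 optimized, then expect I/O to move to 4421.
set_ANA_state non_optimized optimized
sleep 1
port_status 4420 current false && port_status 4421 current true && echo "path switched to 4421"
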
00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77107 00:17:16.981 Connection closed with partial response: 00:17:16.981 00:17:16.981 00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77107 00:17:16.981 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:16.981 [2024-07-15 22:42:57.906195] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:17:16.981 [2024-07-15 22:42:57.906361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77107 ] 00:17:16.981 [2024-07-15 22:42:58.042633] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.981 [2024-07-15 22:42:58.155591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.981 [2024-07-15 22:42:58.210193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:16.981 Running I/O for 90 seconds... 00:17:16.981 [2024-07-15 22:43:15.178284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.981 [2024-07-15 22:43:15.178378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:16.981 [2024-07-15 22:43:15.178439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.981 [2024-07-15 22:43:15.178459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:16.981 [2024-07-15 22:43:15.178482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.981 [2024-07-15 22:43:15.178499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:16.981 [2024-07-15 22:43:15.178520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.981 [2024-07-15 22:43:15.178535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.981 [2024-07-15 22:43:15.178556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.981 [2024-07-15 22:43:15.178570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:16.981 [2024-07-15 22:43:15.178591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.981 [2024-07-15 22:43:15.178605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:16.981 [2024-07-15 22:43:15.178626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.178640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.178676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.178969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.178982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.179425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:17:16.982 [2024-07-15 22:43:15.179518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.179979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.179993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.180030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:16.982 [2024-07-15 22:43:15.180759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.180798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.180837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.180876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.180939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.180963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.180978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.982 [2024-07-15 22:43:15.181771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:16.982 [2024-07-15 22:43:15.181976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.982 [2024-07-15 22:43:15.181991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:17:16.982 [2024-07-15 22:43:15.182016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.182867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.182982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.182996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:16.983 [2024-07-15 22:43:15.183318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:15.183516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:15.183804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:15.183817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.798778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.798859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.798931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.798952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:17:16.983 [2024-07-15 22:43:31.800569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.983 [2024-07-15 22:43:31.800738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:16.983 [2024-07-15 22:43:31.800760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.983 [2024-07-15 22:43:31.800778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:16.983 Received shutdown signal, test time was about 34.418100 seconds 00:17:16.983 00:17:16.983 Latency(us) 00:17:16.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.983 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.983 Verification LBA range: start 0x0 length 0x4000 00:17:16.983 Nvme0n1 : 34.42 7963.47 31.11 0.00 0.00 16041.29 135.91 4026531.84 00:17:16.983 =================================================================================================================== 00:17:16.983 Total : 7963.47 31.11 0.00 0.00 16041.29 135.91 4026531.84 00:17:16.983 22:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 
00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.549 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.549 rmmod nvme_tcp 00:17:17.549 rmmod nvme_fabrics 00:17:17.549 rmmod nvme_keyring 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 77046 ']' 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 77046 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77046 ']' 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77046 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77046 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.550 killing process with pid 77046 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77046' 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77046 00:17:17.550 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77046 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.807 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:17.808 00:17:17.808 real 0m40.806s 00:17:17.808 user 2m11.992s 00:17:17.808 sys 0m12.289s 00:17:17.808 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.808 22:43:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:17.808 
************************************ 00:17:17.808 END TEST nvmf_host_multipath_status 00:17:17.808 ************************************ 00:17:17.808 22:43:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.808 22:43:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:17.808 22:43:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.808 22:43:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.808 22:43:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.808 ************************************ 00:17:17.808 START TEST nvmf_discovery_remove_ifc 00:17:17.808 ************************************ 00:17:17.808 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:17.808 * Looking for test storage... 00:17:17.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:17.808 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.808 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:18.067 Cannot find device "nvmf_tgt_br" 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.067 Cannot find device "nvmf_tgt_br2" 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:18.067 Cannot find device "nvmf_tgt_br" 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:18.067 Cannot find device "nvmf_tgt_br2" 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:18.067 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:18.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:18.326 00:17:18.326 --- 10.0.0.2 ping statistics --- 00:17:18.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.326 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:18.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:17:18.326 00:17:18.326 --- 10.0.0.3 ping statistics --- 00:17:18.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.326 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:18.326 22:43:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:18.326 00:17:18.326 --- 10.0.0.1 ping statistics --- 00:17:18.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.326 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77892 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77892 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77892 ']' 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.326 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.327 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.327 22:43:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.327 [2024-07-15 22:43:36.099617] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
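A quick way to double-check the veth/bridge layout that nvmf_veth_init builds above — 10.0.0.1 on nvmf_init_if in the root namespace, 10.0.0.2 and 10.0.0.3 on nvmf_tgt_if/nvmf_tgt_if2 inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge — is a few hypothetical iproute2 spot-checks; these are not run by the test itself, just a hedged sketch for inspecting the same topology by hand:

  # Root-namespace end of the initiator veth pair
  ip -br addr show nvmf_init_if
  # Target-side interfaces that were moved into the namespace
  ip netns exec nvmf_tgt_ns_spdk ip -br addr show
  # Bridge ports (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) enslaved to nvmf_br
  bridge link show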
00:17:18.327 [2024-07-15 22:43:36.099710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.585 [2024-07-15 22:43:36.238814] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.585 [2024-07-15 22:43:36.342711] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.585 [2024-07-15 22:43:36.342766] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.585 [2024-07-15 22:43:36.342778] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.585 [2024-07-15 22:43:36.342786] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.585 [2024-07-15 22:43:36.342793] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.585 [2024-07-15 22:43:36.342817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.585 [2024-07-15 22:43:36.398851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 [2024-07-15 22:43:37.139003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.577 [2024-07-15 22:43:37.147186] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:19.577 null0 00:17:19.577 [2024-07-15 22:43:37.179052] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77923 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77923 /tmp/host.sock 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77923 ']' 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:19.577 22:43:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.577 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.577 22:43:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 [2024-07-15 22:43:37.251686] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:17:19.577 [2024-07-15 22:43:37.251768] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77923 ] 00:17:19.577 [2024-07-15 22:43:37.389960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.836 [2024-07-15 22:43:37.510050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.402 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.661 [2024-07-15 22:43:38.298140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:20.661 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.661 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:20.661 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.661 22:43:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.595 [2024-07-15 22:43:39.362405] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:21.595 [2024-07-15 22:43:39.362473] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:21.595 [2024-07-15 22:43:39.362497] 
bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:21.595 [2024-07-15 22:43:39.368506] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:21.595 [2024-07-15 22:43:39.426817] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:21.595 [2024-07-15 22:43:39.426949] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:21.595 [2024-07-15 22:43:39.426983] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:21.595 [2024-07-15 22:43:39.427033] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:21.595 [2024-07-15 22:43:39.427068] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:21.595 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.595 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.854 [2024-07-15 22:43:39.430956] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9c70b0 was disconnected and freed. delete nvme_qpair. 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:21.854 22:43:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:22.842 22:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:24.216 22:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:25.151 22:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:26.086 22:43:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:27.022 22:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.280 [2024-07-15 22:43:44.863829] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:27.280 [2024-07-15 22:43:44.863959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.280 [2024-07-15 22:43:44.863989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.280 [2024-07-15 22:43:44.864005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.280 [2024-07-15 22:43:44.864015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.280 [2024-07-15 22:43:44.864025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.280 [2024-07-15 22:43:44.864036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.280 [2024-07-15 22:43:44.864047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.280 [2024-07-15 22:43:44.864056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.280 [2024-07-15 22:43:44.864066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.280 [2024-07-15 22:43:44.864075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.280 [2024-07-15 22:43:44.864085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92cc60 is same with the state(5) to be set 00:17:27.280 [2024-07-15 22:43:44.873828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92cc60 (9): Bad file descriptor 00:17:27.281 [2024-07-15 22:43:44.883884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:28.216 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:28.216 [2024-07-15 22:43:45.932962] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:28.216 [2024-07-15 22:43:45.933104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x92cc60 with addr=10.0.0.2, port=4420 00:17:28.216 [2024-07-15 22:43:45.933136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92cc60 is same with the state(5) to be set 00:17:28.216 [2024-07-15 22:43:45.933210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92cc60 (9): Bad file descriptor 00:17:28.216 [2024-07-15 22:43:45.933955] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:28.216 [2024-07-15 22:43:45.933996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:28.216 [2024-07-15 22:43:45.934014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:28.217 [2024-07-15 22:43:45.934033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:28.217 [2024-07-15 22:43:45.934071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
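The connect()/recv timeouts above (errno 110) are the host-side bdev_nvme retry loop kicking in after the target interface was taken down; they are bounded by the --ctrlr-loss-timeout-sec 2, --reconnect-delay-sec 1 and --fast-io-fail-timeout-sec 1 options passed to bdev_nvme_start_discovery earlier in the trace. For reference, the same attachment could be issued by hand with SPDK's rpc.py using the exact parameters shown in the trace (the script path is an assumption based on the repo layout used in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach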
00:17:28.217 [2024-07-15 22:43:45.934089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:28.217 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.217 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:28.217 22:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:29.151 [2024-07-15 22:43:46.934166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:29.151 [2024-07-15 22:43:46.934284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:29.151 [2024-07-15 22:43:46.934298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:29.151 [2024-07-15 22:43:46.934310] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:29.151 [2024-07-15 22:43:46.934341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:29.151 [2024-07-15 22:43:46.934379] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:29.151 [2024-07-15 22:43:46.934463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.151 [2024-07-15 22:43:46.934481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.151 [2024-07-15 22:43:46.934497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.151 [2024-07-15 22:43:46.934507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.151 [2024-07-15 22:43:46.934518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.151 [2024-07-15 22:43:46.934527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.151 [2024-07-15 22:43:46.934538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.152 [2024-07-15 22:43:46.934548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.152 [2024-07-15 22:43:46.934559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.152 [2024-07-15 22:43:46.934568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.152 [2024-07-15 22:43:46.934578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:29.152 [2024-07-15 22:43:46.934623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930a00 (9): Bad file descriptor 00:17:29.152 [2024-07-15 22:43:46.935612] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:29.152 [2024-07-15 22:43:46.935630] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:29.152 22:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:29.411 22:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:30.348 22:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:31.291 [2024-07-15 22:43:48.946989] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:31.291 [2024-07-15 22:43:48.947060] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:31.291 [2024-07-15 22:43:48.947082] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:31.291 [2024-07-15 22:43:48.953049] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:31.291 [2024-07-15 22:43:49.010068] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:31.291 [2024-07-15 22:43:49.010143] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:31.291 [2024-07-15 22:43:49.010171] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:31.291 [2024-07-15 22:43:49.010190] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:31.291 [2024-07-15 22:43:49.010214] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:31.291 [2024-07-15 22:43:49.015580] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9aeb80 was disconnected and freed. delete nvme_qpair. 
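The repeated bdev_get_bdevs | jq | sort | xargs calls throughout the trace come from the script's small polling helpers; a hedged reconstruction of that pattern, with names and structure inferred from the trace rather than copied from host/discovery_remove_ifc.sh, looks roughly like:

  # List host-side bdev names as a single sorted, space-joined string
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # Poll once per second until the bdev list matches the expected value,
  # e.g. wait_for_bdev '' after the interface is removed, or
  # wait_for_bdev nvme1n1 once it is restored and rediscovered
  wait_for_bdev() {
      local expected_bdev=$1
      while [[ "$(get_bdev_list)" != "$expected_bdev" ]]; do
          sleep 1
      done
  }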
00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.549 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77923 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77923 ']' 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77923 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77923 00:17:31.550 killing process with pid 77923 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77923' 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77923 00:17:31.550 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77923 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.808 rmmod nvme_tcp 00:17:31.808 rmmod nvme_fabrics 00:17:31.808 rmmod nvme_keyring 00:17:31.808 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:32.066 22:43:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77892 ']' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77892 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77892 ']' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77892 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77892 00:17:32.066 killing process with pid 77892 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77892' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77892 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77892 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.066 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.325 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:32.325 00:17:32.325 real 0m14.375s 00:17:32.325 user 0m24.914s 00:17:32.325 sys 0m2.450s 00:17:32.325 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.325 ************************************ 00:17:32.325 END TEST nvmf_discovery_remove_ifc 00:17:32.325 ************************************ 00:17:32.325 22:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.325 22:43:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:32.325 22:43:49 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:32.325 22:43:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:32.325 22:43:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.325 22:43:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.325 ************************************ 00:17:32.325 START TEST nvmf_identify_kernel_target 00:17:32.325 ************************************ 00:17:32.325 22:43:49 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:32.325 * Looking for test storage... 00:17:32.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:32.325 Cannot find device "nvmf_tgt_br" 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.325 Cannot find device "nvmf_tgt_br2" 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:32.325 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:32.584 Cannot find device "nvmf_tgt_br" 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:32.584 Cannot find device "nvmf_tgt_br2" 00:17:32.584 22:43:50 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:32.584 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:32.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:17:32.843 00:17:32.843 --- 10.0.0.2 ping statistics --- 00:17:32.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.843 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:32.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:32.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:32.843 00:17:32.843 --- 10.0.0.3 ping statistics --- 00:17:32.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.843 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:32.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:32.843 00:17:32.843 --- 10.0.0.1 ping statistics --- 00:17:32.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.843 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:32.843 22:43:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:33.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:33.102 Waiting for block devices as requested 00:17:33.102 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:33.359 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:33.359 No valid GPT data, bailing 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:33.359 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:33.615 No valid GPT data, bailing 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:33.615 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:33.616 No valid GPT data, bailing 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:33.616 No valid GPT data, bailing 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
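The loop above has settled on /dev/nvme1n1 as the block device to export, and the mkdir that closes this entry starts building the in-kernel nvmet target under configfs. The xtrace output records only the values being echoed, not the configfs files they are redirected into, so the following is a hedged reconstruction of the configure_kernel_target sequence: the paths and echoed values are taken from the trace, while the attribute file names on the right-hand side are assumptions based on the standard nvmet configfs layout.

# Hedged reconstruction; echoed values are from the trace, redirect targets are assumed.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

mkdir "$subsys"                 # create the subsystem
mkdir "$subsys/namespaces/1"    # one namespace in it
mkdir "$port"                   # one listening port

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # model string (assumed target file)
echo 1            > "$subsys/attr_allow_any_host"               # assumed target file
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"          # back the namespace with the free NVMe disk
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"     # main-namespace address picked by get_main_ns_ip above
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"     # expose the subsystem on the port

Once the symlink is in place, the nvme discover call that follows in the trace sees two records on 10.0.0.1:4420: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.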
00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -a 10.0.0.1 -t tcp -s 4420 00:17:33.616 00:17:33.616 Discovery Log Number of Records 2, Generation counter 2 00:17:33.616 =====Discovery Log Entry 0====== 00:17:33.616 trtype: tcp 00:17:33.616 adrfam: ipv4 00:17:33.616 subtype: current discovery subsystem 00:17:33.616 treq: not specified, sq flow control disable supported 00:17:33.616 portid: 1 00:17:33.616 trsvcid: 4420 00:17:33.616 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:33.616 traddr: 10.0.0.1 00:17:33.616 eflags: none 00:17:33.616 sectype: none 00:17:33.616 =====Discovery Log Entry 1====== 00:17:33.616 trtype: tcp 00:17:33.616 adrfam: ipv4 00:17:33.616 subtype: nvme subsystem 00:17:33.616 treq: not specified, sq flow control disable supported 00:17:33.616 portid: 1 00:17:33.616 trsvcid: 4420 00:17:33.616 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:33.616 traddr: 10.0.0.1 00:17:33.616 eflags: none 00:17:33.616 sectype: none 00:17:33.616 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:33.616 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:33.873 ===================================================== 00:17:33.873 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:33.873 ===================================================== 00:17:33.873 Controller Capabilities/Features 00:17:33.873 ================================ 00:17:33.873 Vendor ID: 0000 00:17:33.873 Subsystem Vendor ID: 0000 00:17:33.873 Serial Number: 1bdcc8f3c7c1d0c9d51b 00:17:33.873 Model Number: Linux 00:17:33.873 Firmware Version: 6.7.0-68 00:17:33.873 Recommended Arb Burst: 0 00:17:33.873 IEEE OUI Identifier: 00 00 00 00:17:33.873 Multi-path I/O 00:17:33.873 May have multiple subsystem ports: No 00:17:33.873 May have multiple controllers: No 00:17:33.873 Associated with SR-IOV VF: No 00:17:33.873 Max Data Transfer Size: Unlimited 00:17:33.873 Max Number of Namespaces: 0 
00:17:33.873 Max Number of I/O Queues: 1024 00:17:33.873 NVMe Specification Version (VS): 1.3 00:17:33.873 NVMe Specification Version (Identify): 1.3 00:17:33.873 Maximum Queue Entries: 1024 00:17:33.873 Contiguous Queues Required: No 00:17:33.873 Arbitration Mechanisms Supported 00:17:33.873 Weighted Round Robin: Not Supported 00:17:33.873 Vendor Specific: Not Supported 00:17:33.873 Reset Timeout: 7500 ms 00:17:33.873 Doorbell Stride: 4 bytes 00:17:33.873 NVM Subsystem Reset: Not Supported 00:17:33.873 Command Sets Supported 00:17:33.873 NVM Command Set: Supported 00:17:33.873 Boot Partition: Not Supported 00:17:33.873 Memory Page Size Minimum: 4096 bytes 00:17:33.874 Memory Page Size Maximum: 4096 bytes 00:17:33.874 Persistent Memory Region: Not Supported 00:17:33.874 Optional Asynchronous Events Supported 00:17:33.874 Namespace Attribute Notices: Not Supported 00:17:33.874 Firmware Activation Notices: Not Supported 00:17:33.874 ANA Change Notices: Not Supported 00:17:33.874 PLE Aggregate Log Change Notices: Not Supported 00:17:33.874 LBA Status Info Alert Notices: Not Supported 00:17:33.874 EGE Aggregate Log Change Notices: Not Supported 00:17:33.874 Normal NVM Subsystem Shutdown event: Not Supported 00:17:33.874 Zone Descriptor Change Notices: Not Supported 00:17:33.874 Discovery Log Change Notices: Supported 00:17:33.874 Controller Attributes 00:17:33.874 128-bit Host Identifier: Not Supported 00:17:33.874 Non-Operational Permissive Mode: Not Supported 00:17:33.874 NVM Sets: Not Supported 00:17:33.874 Read Recovery Levels: Not Supported 00:17:33.874 Endurance Groups: Not Supported 00:17:33.874 Predictable Latency Mode: Not Supported 00:17:33.874 Traffic Based Keep ALive: Not Supported 00:17:33.874 Namespace Granularity: Not Supported 00:17:33.874 SQ Associations: Not Supported 00:17:33.874 UUID List: Not Supported 00:17:33.874 Multi-Domain Subsystem: Not Supported 00:17:33.874 Fixed Capacity Management: Not Supported 00:17:33.874 Variable Capacity Management: Not Supported 00:17:33.874 Delete Endurance Group: Not Supported 00:17:33.874 Delete NVM Set: Not Supported 00:17:33.874 Extended LBA Formats Supported: Not Supported 00:17:33.874 Flexible Data Placement Supported: Not Supported 00:17:33.874 00:17:33.874 Controller Memory Buffer Support 00:17:33.874 ================================ 00:17:33.874 Supported: No 00:17:33.874 00:17:33.874 Persistent Memory Region Support 00:17:33.874 ================================ 00:17:33.874 Supported: No 00:17:33.874 00:17:33.874 Admin Command Set Attributes 00:17:33.874 ============================ 00:17:33.874 Security Send/Receive: Not Supported 00:17:33.874 Format NVM: Not Supported 00:17:33.874 Firmware Activate/Download: Not Supported 00:17:33.874 Namespace Management: Not Supported 00:17:33.874 Device Self-Test: Not Supported 00:17:33.874 Directives: Not Supported 00:17:33.874 NVMe-MI: Not Supported 00:17:33.874 Virtualization Management: Not Supported 00:17:33.874 Doorbell Buffer Config: Not Supported 00:17:33.874 Get LBA Status Capability: Not Supported 00:17:33.874 Command & Feature Lockdown Capability: Not Supported 00:17:33.874 Abort Command Limit: 1 00:17:33.874 Async Event Request Limit: 1 00:17:33.874 Number of Firmware Slots: N/A 00:17:33.874 Firmware Slot 1 Read-Only: N/A 00:17:33.874 Firmware Activation Without Reset: N/A 00:17:33.874 Multiple Update Detection Support: N/A 00:17:33.874 Firmware Update Granularity: No Information Provided 00:17:33.874 Per-Namespace SMART Log: No 00:17:33.874 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:33.874 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:33.874 Command Effects Log Page: Not Supported 00:17:33.874 Get Log Page Extended Data: Supported 00:17:33.874 Telemetry Log Pages: Not Supported 00:17:33.874 Persistent Event Log Pages: Not Supported 00:17:33.874 Supported Log Pages Log Page: May Support 00:17:33.874 Commands Supported & Effects Log Page: Not Supported 00:17:33.874 Feature Identifiers & Effects Log Page:May Support 00:17:33.874 NVMe-MI Commands & Effects Log Page: May Support 00:17:33.874 Data Area 4 for Telemetry Log: Not Supported 00:17:33.874 Error Log Page Entries Supported: 1 00:17:33.874 Keep Alive: Not Supported 00:17:33.874 00:17:33.874 NVM Command Set Attributes 00:17:33.874 ========================== 00:17:33.874 Submission Queue Entry Size 00:17:33.874 Max: 1 00:17:33.874 Min: 1 00:17:33.874 Completion Queue Entry Size 00:17:33.874 Max: 1 00:17:33.874 Min: 1 00:17:33.874 Number of Namespaces: 0 00:17:33.874 Compare Command: Not Supported 00:17:33.874 Write Uncorrectable Command: Not Supported 00:17:33.874 Dataset Management Command: Not Supported 00:17:33.874 Write Zeroes Command: Not Supported 00:17:33.874 Set Features Save Field: Not Supported 00:17:33.874 Reservations: Not Supported 00:17:33.874 Timestamp: Not Supported 00:17:33.874 Copy: Not Supported 00:17:33.874 Volatile Write Cache: Not Present 00:17:33.874 Atomic Write Unit (Normal): 1 00:17:33.874 Atomic Write Unit (PFail): 1 00:17:33.874 Atomic Compare & Write Unit: 1 00:17:33.874 Fused Compare & Write: Not Supported 00:17:33.874 Scatter-Gather List 00:17:33.874 SGL Command Set: Supported 00:17:33.874 SGL Keyed: Not Supported 00:17:33.874 SGL Bit Bucket Descriptor: Not Supported 00:17:33.874 SGL Metadata Pointer: Not Supported 00:17:33.874 Oversized SGL: Not Supported 00:17:33.874 SGL Metadata Address: Not Supported 00:17:33.874 SGL Offset: Supported 00:17:33.874 Transport SGL Data Block: Not Supported 00:17:33.874 Replay Protected Memory Block: Not Supported 00:17:33.874 00:17:33.874 Firmware Slot Information 00:17:33.874 ========================= 00:17:33.874 Active slot: 0 00:17:33.874 00:17:33.874 00:17:33.874 Error Log 00:17:33.874 ========= 00:17:33.874 00:17:33.874 Active Namespaces 00:17:33.874 ================= 00:17:33.874 Discovery Log Page 00:17:33.874 ================== 00:17:33.874 Generation Counter: 2 00:17:33.874 Number of Records: 2 00:17:33.874 Record Format: 0 00:17:33.874 00:17:33.874 Discovery Log Entry 0 00:17:33.874 ---------------------- 00:17:33.874 Transport Type: 3 (TCP) 00:17:33.874 Address Family: 1 (IPv4) 00:17:33.874 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:33.874 Entry Flags: 00:17:33.874 Duplicate Returned Information: 0 00:17:33.874 Explicit Persistent Connection Support for Discovery: 0 00:17:33.874 Transport Requirements: 00:17:33.874 Secure Channel: Not Specified 00:17:33.874 Port ID: 1 (0x0001) 00:17:33.874 Controller ID: 65535 (0xffff) 00:17:33.874 Admin Max SQ Size: 32 00:17:33.874 Transport Service Identifier: 4420 00:17:33.874 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:33.874 Transport Address: 10.0.0.1 00:17:33.874 Discovery Log Entry 1 00:17:33.874 ---------------------- 00:17:33.874 Transport Type: 3 (TCP) 00:17:33.874 Address Family: 1 (IPv4) 00:17:33.874 Subsystem Type: 2 (NVM Subsystem) 00:17:33.874 Entry Flags: 00:17:33.874 Duplicate Returned Information: 0 00:17:33.874 Explicit Persistent Connection Support for Discovery: 0 00:17:33.874 Transport Requirements: 00:17:33.874 
Secure Channel: Not Specified 00:17:33.874 Port ID: 1 (0x0001) 00:17:33.874 Controller ID: 65535 (0xffff) 00:17:33.874 Admin Max SQ Size: 32 00:17:33.874 Transport Service Identifier: 4420 00:17:33.874 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:33.874 Transport Address: 10.0.0.1 00:17:33.874 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:34.133 get_feature(0x01) failed 00:17:34.133 get_feature(0x02) failed 00:17:34.133 get_feature(0x04) failed 00:17:34.133 ===================================================== 00:17:34.133 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:34.133 ===================================================== 00:17:34.133 Controller Capabilities/Features 00:17:34.133 ================================ 00:17:34.133 Vendor ID: 0000 00:17:34.133 Subsystem Vendor ID: 0000 00:17:34.133 Serial Number: 4f9a0ff855e3416000ea 00:17:34.133 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:34.133 Firmware Version: 6.7.0-68 00:17:34.133 Recommended Arb Burst: 6 00:17:34.133 IEEE OUI Identifier: 00 00 00 00:17:34.133 Multi-path I/O 00:17:34.133 May have multiple subsystem ports: Yes 00:17:34.133 May have multiple controllers: Yes 00:17:34.133 Associated with SR-IOV VF: No 00:17:34.133 Max Data Transfer Size: Unlimited 00:17:34.134 Max Number of Namespaces: 1024 00:17:34.134 Max Number of I/O Queues: 128 00:17:34.134 NVMe Specification Version (VS): 1.3 00:17:34.134 NVMe Specification Version (Identify): 1.3 00:17:34.134 Maximum Queue Entries: 1024 00:17:34.134 Contiguous Queues Required: No 00:17:34.134 Arbitration Mechanisms Supported 00:17:34.134 Weighted Round Robin: Not Supported 00:17:34.134 Vendor Specific: Not Supported 00:17:34.134 Reset Timeout: 7500 ms 00:17:34.134 Doorbell Stride: 4 bytes 00:17:34.134 NVM Subsystem Reset: Not Supported 00:17:34.134 Command Sets Supported 00:17:34.134 NVM Command Set: Supported 00:17:34.134 Boot Partition: Not Supported 00:17:34.134 Memory Page Size Minimum: 4096 bytes 00:17:34.134 Memory Page Size Maximum: 4096 bytes 00:17:34.134 Persistent Memory Region: Not Supported 00:17:34.134 Optional Asynchronous Events Supported 00:17:34.134 Namespace Attribute Notices: Supported 00:17:34.134 Firmware Activation Notices: Not Supported 00:17:34.134 ANA Change Notices: Supported 00:17:34.134 PLE Aggregate Log Change Notices: Not Supported 00:17:34.134 LBA Status Info Alert Notices: Not Supported 00:17:34.134 EGE Aggregate Log Change Notices: Not Supported 00:17:34.134 Normal NVM Subsystem Shutdown event: Not Supported 00:17:34.134 Zone Descriptor Change Notices: Not Supported 00:17:34.134 Discovery Log Change Notices: Not Supported 00:17:34.134 Controller Attributes 00:17:34.134 128-bit Host Identifier: Supported 00:17:34.134 Non-Operational Permissive Mode: Not Supported 00:17:34.134 NVM Sets: Not Supported 00:17:34.134 Read Recovery Levels: Not Supported 00:17:34.134 Endurance Groups: Not Supported 00:17:34.134 Predictable Latency Mode: Not Supported 00:17:34.134 Traffic Based Keep ALive: Supported 00:17:34.134 Namespace Granularity: Not Supported 00:17:34.134 SQ Associations: Not Supported 00:17:34.134 UUID List: Not Supported 00:17:34.134 Multi-Domain Subsystem: Not Supported 00:17:34.134 Fixed Capacity Management: Not Supported 00:17:34.134 Variable Capacity Management: Not Supported 00:17:34.134 
Delete Endurance Group: Not Supported 00:17:34.134 Delete NVM Set: Not Supported 00:17:34.134 Extended LBA Formats Supported: Not Supported 00:17:34.134 Flexible Data Placement Supported: Not Supported 00:17:34.134 00:17:34.134 Controller Memory Buffer Support 00:17:34.134 ================================ 00:17:34.134 Supported: No 00:17:34.134 00:17:34.134 Persistent Memory Region Support 00:17:34.134 ================================ 00:17:34.134 Supported: No 00:17:34.134 00:17:34.134 Admin Command Set Attributes 00:17:34.134 ============================ 00:17:34.134 Security Send/Receive: Not Supported 00:17:34.134 Format NVM: Not Supported 00:17:34.134 Firmware Activate/Download: Not Supported 00:17:34.134 Namespace Management: Not Supported 00:17:34.134 Device Self-Test: Not Supported 00:17:34.134 Directives: Not Supported 00:17:34.134 NVMe-MI: Not Supported 00:17:34.134 Virtualization Management: Not Supported 00:17:34.134 Doorbell Buffer Config: Not Supported 00:17:34.134 Get LBA Status Capability: Not Supported 00:17:34.134 Command & Feature Lockdown Capability: Not Supported 00:17:34.134 Abort Command Limit: 4 00:17:34.134 Async Event Request Limit: 4 00:17:34.134 Number of Firmware Slots: N/A 00:17:34.134 Firmware Slot 1 Read-Only: N/A 00:17:34.134 Firmware Activation Without Reset: N/A 00:17:34.134 Multiple Update Detection Support: N/A 00:17:34.134 Firmware Update Granularity: No Information Provided 00:17:34.134 Per-Namespace SMART Log: Yes 00:17:34.134 Asymmetric Namespace Access Log Page: Supported 00:17:34.134 ANA Transition Time : 10 sec 00:17:34.134 00:17:34.134 Asymmetric Namespace Access Capabilities 00:17:34.134 ANA Optimized State : Supported 00:17:34.134 ANA Non-Optimized State : Supported 00:17:34.134 ANA Inaccessible State : Supported 00:17:34.134 ANA Persistent Loss State : Supported 00:17:34.134 ANA Change State : Supported 00:17:34.134 ANAGRPID is not changed : No 00:17:34.134 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:34.134 00:17:34.134 ANA Group Identifier Maximum : 128 00:17:34.134 Number of ANA Group Identifiers : 128 00:17:34.134 Max Number of Allowed Namespaces : 1024 00:17:34.134 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:34.134 Command Effects Log Page: Supported 00:17:34.134 Get Log Page Extended Data: Supported 00:17:34.134 Telemetry Log Pages: Not Supported 00:17:34.134 Persistent Event Log Pages: Not Supported 00:17:34.134 Supported Log Pages Log Page: May Support 00:17:34.134 Commands Supported & Effects Log Page: Not Supported 00:17:34.134 Feature Identifiers & Effects Log Page:May Support 00:17:34.134 NVMe-MI Commands & Effects Log Page: May Support 00:17:34.134 Data Area 4 for Telemetry Log: Not Supported 00:17:34.134 Error Log Page Entries Supported: 128 00:17:34.134 Keep Alive: Supported 00:17:34.134 Keep Alive Granularity: 1000 ms 00:17:34.134 00:17:34.134 NVM Command Set Attributes 00:17:34.134 ========================== 00:17:34.134 Submission Queue Entry Size 00:17:34.134 Max: 64 00:17:34.134 Min: 64 00:17:34.134 Completion Queue Entry Size 00:17:34.134 Max: 16 00:17:34.134 Min: 16 00:17:34.134 Number of Namespaces: 1024 00:17:34.134 Compare Command: Not Supported 00:17:34.134 Write Uncorrectable Command: Not Supported 00:17:34.134 Dataset Management Command: Supported 00:17:34.134 Write Zeroes Command: Supported 00:17:34.134 Set Features Save Field: Not Supported 00:17:34.134 Reservations: Not Supported 00:17:34.134 Timestamp: Not Supported 00:17:34.134 Copy: Not Supported 00:17:34.134 Volatile Write Cache: Present 
00:17:34.134 Atomic Write Unit (Normal): 1 00:17:34.134 Atomic Write Unit (PFail): 1 00:17:34.134 Atomic Compare & Write Unit: 1 00:17:34.134 Fused Compare & Write: Not Supported 00:17:34.134 Scatter-Gather List 00:17:34.134 SGL Command Set: Supported 00:17:34.134 SGL Keyed: Not Supported 00:17:34.134 SGL Bit Bucket Descriptor: Not Supported 00:17:34.134 SGL Metadata Pointer: Not Supported 00:17:34.134 Oversized SGL: Not Supported 00:17:34.134 SGL Metadata Address: Not Supported 00:17:34.134 SGL Offset: Supported 00:17:34.134 Transport SGL Data Block: Not Supported 00:17:34.134 Replay Protected Memory Block: Not Supported 00:17:34.134 00:17:34.134 Firmware Slot Information 00:17:34.134 ========================= 00:17:34.134 Active slot: 0 00:17:34.134 00:17:34.134 Asymmetric Namespace Access 00:17:34.134 =========================== 00:17:34.134 Change Count : 0 00:17:34.134 Number of ANA Group Descriptors : 1 00:17:34.134 ANA Group Descriptor : 0 00:17:34.134 ANA Group ID : 1 00:17:34.134 Number of NSID Values : 1 00:17:34.134 Change Count : 0 00:17:34.134 ANA State : 1 00:17:34.134 Namespace Identifier : 1 00:17:34.134 00:17:34.134 Commands Supported and Effects 00:17:34.134 ============================== 00:17:34.134 Admin Commands 00:17:34.134 -------------- 00:17:34.134 Get Log Page (02h): Supported 00:17:34.134 Identify (06h): Supported 00:17:34.134 Abort (08h): Supported 00:17:34.134 Set Features (09h): Supported 00:17:34.134 Get Features (0Ah): Supported 00:17:34.134 Asynchronous Event Request (0Ch): Supported 00:17:34.134 Keep Alive (18h): Supported 00:17:34.134 I/O Commands 00:17:34.134 ------------ 00:17:34.134 Flush (00h): Supported 00:17:34.134 Write (01h): Supported LBA-Change 00:17:34.134 Read (02h): Supported 00:17:34.134 Write Zeroes (08h): Supported LBA-Change 00:17:34.134 Dataset Management (09h): Supported 00:17:34.134 00:17:34.134 Error Log 00:17:34.134 ========= 00:17:34.134 Entry: 0 00:17:34.134 Error Count: 0x3 00:17:34.134 Submission Queue Id: 0x0 00:17:34.134 Command Id: 0x5 00:17:34.134 Phase Bit: 0 00:17:34.134 Status Code: 0x2 00:17:34.134 Status Code Type: 0x0 00:17:34.134 Do Not Retry: 1 00:17:34.134 Error Location: 0x28 00:17:34.134 LBA: 0x0 00:17:34.134 Namespace: 0x0 00:17:34.134 Vendor Log Page: 0x0 00:17:34.134 ----------- 00:17:34.134 Entry: 1 00:17:34.134 Error Count: 0x2 00:17:34.134 Submission Queue Id: 0x0 00:17:34.134 Command Id: 0x5 00:17:34.134 Phase Bit: 0 00:17:34.134 Status Code: 0x2 00:17:34.134 Status Code Type: 0x0 00:17:34.134 Do Not Retry: 1 00:17:34.134 Error Location: 0x28 00:17:34.134 LBA: 0x0 00:17:34.134 Namespace: 0x0 00:17:34.134 Vendor Log Page: 0x0 00:17:34.134 ----------- 00:17:34.134 Entry: 2 00:17:34.134 Error Count: 0x1 00:17:34.134 Submission Queue Id: 0x0 00:17:34.134 Command Id: 0x4 00:17:34.134 Phase Bit: 0 00:17:34.134 Status Code: 0x2 00:17:34.134 Status Code Type: 0x0 00:17:34.134 Do Not Retry: 1 00:17:34.134 Error Location: 0x28 00:17:34.134 LBA: 0x0 00:17:34.134 Namespace: 0x0 00:17:34.134 Vendor Log Page: 0x0 00:17:34.134 00:17:34.134 Number of Queues 00:17:34.134 ================ 00:17:34.134 Number of I/O Submission Queues: 128 00:17:34.134 Number of I/O Completion Queues: 128 00:17:34.134 00:17:34.134 ZNS Specific Controller Data 00:17:34.134 ============================ 00:17:34.134 Zone Append Size Limit: 0 00:17:34.134 00:17:34.134 00:17:34.134 Active Namespaces 00:17:34.134 ================= 00:17:34.134 get_feature(0x05) failed 00:17:34.134 Namespace ID:1 00:17:34.134 Command Set Identifier: NVM (00h) 
00:17:34.134 Deallocate: Supported 00:17:34.134 Deallocated/Unwritten Error: Not Supported 00:17:34.134 Deallocated Read Value: Unknown 00:17:34.134 Deallocate in Write Zeroes: Not Supported 00:17:34.134 Deallocated Guard Field: 0xFFFF 00:17:34.134 Flush: Supported 00:17:34.134 Reservation: Not Supported 00:17:34.134 Namespace Sharing Capabilities: Multiple Controllers 00:17:34.134 Size (in LBAs): 1310720 (5GiB) 00:17:34.134 Capacity (in LBAs): 1310720 (5GiB) 00:17:34.134 Utilization (in LBAs): 1310720 (5GiB) 00:17:34.134 UUID: 7a7b9bbe-0ca4-449a-8b9f-663fdd71285b 00:17:34.134 Thin Provisioning: Not Supported 00:17:34.134 Per-NS Atomic Units: Yes 00:17:34.134 Atomic Boundary Size (Normal): 0 00:17:34.134 Atomic Boundary Size (PFail): 0 00:17:34.134 Atomic Boundary Offset: 0 00:17:34.134 NGUID/EUI64 Never Reused: No 00:17:34.134 ANA group ID: 1 00:17:34.134 Namespace Write Protected: No 00:17:34.134 Number of LBA Formats: 1 00:17:34.134 Current LBA Format: LBA Format #00 00:17:34.134 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:34.134 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.134 rmmod nvme_tcp 00:17:34.134 rmmod nvme_fabrics 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:34.134 
22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:34.134 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:34.392 22:43:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:35.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:35.010 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.268 ************************************ 00:17:35.268 END TEST nvmf_identify_kernel_target 00:17:35.268 ************************************ 00:17:35.268 00:17:35.268 real 0m2.894s 00:17:35.268 user 0m0.974s 00:17:35.268 sys 0m1.406s 00:17:35.268 22:43:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.268 22:43:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 22:43:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.268 22:43:52 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:35.268 22:43:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.268 22:43:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.268 22:43:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 ************************************ 00:17:35.268 START TEST nvmf_auth_host 00:17:35.268 ************************************ 00:17:35.268 22:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:35.268 * Looking for test storage... 
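The clean_kernel_target teardown traced just above (it runs as the previous test exits, before this nvmf_auth_host run gets going) undoes that configfs setup in reverse. Condensed into plain commands, with the paths copied from the trace and only the target of the initial echo 0 assumed (disabling the namespace before removal):

# Hedged condensation of clean_kernel_target as traced above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

echo 0 > "$subsys/namespaces/1/enable"                          # assumed redirect target
rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink the subsystem from the port
rmdir  "$subsys/namespaces/1"
rmdir  "$nvmet/ports/1"
rmdir  "$subsys"
modprobe -r nvmet_tcp nvmet                                     # unload the kernel target modules

After that, setup.sh hands the NVMe devices back to uio_pci_generic and the test prints the timing summary shown above.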
00:17:35.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.268 22:43:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.269 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.527 Cannot find device "nvmf_tgt_br" 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.527 Cannot find device "nvmf_tgt_br2" 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.527 Cannot find device "nvmf_tgt_br" 
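nvmftestinit for this auth run now rebuilds the same veth/bridge topology that the identify test used; the "Cannot find device" messages are just the cleanup pass finding that the old interfaces are already gone. Reduced to its essentials, the sequence that continues in the trace creates this layout (interface names and addresses as logged; grouping and comments added here for readability):

# Condensed restatement of nvmf_veth_init as traced in this section.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: the *_if ends carry traffic, the *_br ends get bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target-side interfaces live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the *_br ends together and open TCP/4420 for NVMe over TCP.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm the bridge is forwarding before NVMF_APP is prefixed with ip netns exec and started inside nvmf_tgt_ns_spdk.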
00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.527 Cannot find device "nvmf_tgt_br2" 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.527 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.783 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.783 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:35.784 00:17:35.784 --- 10.0.0.2 ping statistics --- 00:17:35.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.784 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:17:35.784 00:17:35.784 --- 10.0.0.3 ping statistics --- 00:17:35.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.784 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:17:35.784 00:17:35.784 --- 10.0.0.1 ping statistics --- 00:17:35.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.784 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78806 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78806 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78806 ']' 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.784 22:43:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.784 22:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.719 22:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.719 22:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:36.719 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.719 22:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.719 22:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4e82ca3e3f13528db6f20d086272e349 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oLr 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4e82ca3e3f13528db6f20d086272e349 0 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4e82ca3e3f13528db6f20d086272e349 0 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4e82ca3e3f13528db6f20d086272e349 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oLr 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oLr 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oLr 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4481f93085f46225f825746992c52ac6a10c8df29064eaf7f6d39ed407c5f714 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.n8g 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4481f93085f46225f825746992c52ac6a10c8df29064eaf7f6d39ed407c5f714 3 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4481f93085f46225f825746992c52ac6a10c8df29064eaf7f6d39ed407c5f714 3 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4481f93085f46225f825746992c52ac6a10c8df29064eaf7f6d39ed407c5f714 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.n8g 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.n8g 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.n8g 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=50ff7d0ad7d128e28397c89fb22ef42c3972b8aa7ae372ef 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4fr 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 50ff7d0ad7d128e28397c89fb22ef42c3972b8aa7ae372ef 0 00:17:36.978 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 50ff7d0ad7d128e28397c89fb22ef42c3972b8aa7ae372ef 0 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=50ff7d0ad7d128e28397c89fb22ef42c3972b8aa7ae372ef 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4fr 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4fr 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4fr 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=735fd0a59072966b5d0cd123fa5bfbb1cad873c645fda641 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3Nx 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 735fd0a59072966b5d0cd123fa5bfbb1cad873c645fda641 2 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 735fd0a59072966b5d0cd123fa5bfbb1cad873c645fda641 2 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=735fd0a59072966b5d0cd123fa5bfbb1cad873c645fda641 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:36.979 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3Nx 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3Nx 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3Nx 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6095ddf603961c4fdbcd965727dea71c 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wdG 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6095ddf603961c4fdbcd965727dea71c 
1 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6095ddf603961c4fdbcd965727dea71c 1 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6095ddf603961c4fdbcd965727dea71c 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wdG 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wdG 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wdG 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:37.238 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e0c82f7fbb7e2c6cdafd7a8af1c4a4bb 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aSI 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e0c82f7fbb7e2c6cdafd7a8af1c4a4bb 1 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e0c82f7fbb7e2c6cdafd7a8af1c4a4bb 1 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e0c82f7fbb7e2c6cdafd7a8af1c4a4bb 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aSI 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aSI 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.aSI 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:37.239 22:43:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=59a4478d4ac751f5ec686991c627a82b71c5bdf03ecee505 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.M9U 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 59a4478d4ac751f5ec686991c627a82b71c5bdf03ecee505 2 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 59a4478d4ac751f5ec686991c627a82b71c5bdf03ecee505 2 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=59a4478d4ac751f5ec686991c627a82b71c5bdf03ecee505 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:37.239 22:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.M9U 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.M9U 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.M9U 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58df469af66c0dec3eaf526a7010acb5 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Y64 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58df469af66c0dec3eaf526a7010acb5 0 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58df469af66c0dec3eaf526a7010acb5 0 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58df469af66c0dec3eaf526a7010acb5 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:37.239 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Y64 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Y64 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Y64 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1c5bccd4301ecc06b30c55878325fac179f83482448a9e7dc154b743629123f6 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ah2 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1c5bccd4301ecc06b30c55878325fac179f83482448a9e7dc154b743629123f6 3 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1c5bccd4301ecc06b30c55878325fac179f83482448a9e7dc154b743629123f6 3 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.498 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1c5bccd4301ecc06b30c55878325fac179f83482448a9e7dc154b743629123f6 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ah2 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ah2 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ah2 00:17:37.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78806 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78806 ']' 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
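For readers following the gen_dhchap_key calls traced above (host/auth.sh@73 through @77), the helper boils down to drawing random bytes, hex-encoding them, and writing a DHHC-1-wrapped secret to a 0600-mode temp file whose path lands in keys[i] or ckeys[i]. The sketch below is illustrative only; the real DHHC-1 payload is base64-wrapped by the inline "python -" step visible in the trace, which is only hinted at in a comment and a placeholder here.

  # Illustrative, condensed sketch of gen_dhchap_key <digest> <len>
  len=32                                           # secret length in hex characters
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 4e82ca3e3f13528db6f20d086272e349
  file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.oLr
  # format_dhchap_key turns the hex string into "DHHC-1:<digest id>:<base64 payload>:"
  # via the inline "python -" call seen above; the actual wrapping is omitted here.
  printf 'DHHC-1:00:%s:\n' "$key" > "$file"        # placeholder payload, not the real encoding
  chmod 0600 "$file"
  echo "$file"                                     # this path is what the test stores and reuses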
00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.499 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.757 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.757 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:37.757 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.757 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oLr 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.n8g ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n8g 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4fr 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3Nx ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Nx 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wdG 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.aSI ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aSI 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
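In the key-registration loop above, which continues below for key3, ckey3, and key4, rpc_cmd forwards each keyring_file_add_key call to the freshly started nvmf_tgt over its JSON-RPC socket. Run outside the harness, the same registration would look roughly like the following; the socket path matches the rpc_addr seen in the trace and the key-file paths are the ones generated in this particular run, so both are run-specific rather than fixed values.

  # Register host (keyN) and controller (ckeyN) DHCHAP secrets with the running target;
  # rpc_cmd in the harness is a thin wrapper around scripts/rpc.py.
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.oLr
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n8g
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.4fr
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Nx
  # ...and so on through key4; key4 has no controller secret (ckeys[4] is empty) in this run.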
00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.M9U 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Y64 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Y64 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ah2 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
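The configure_kernel_target body that the trace is entering here (nvmf/common.sh@641 onward) builds a Linux kernel nvmet target over configfs and exports the chosen local namespace, /dev/nvme1n1 in this run, on 10.0.0.1:4420 so the SPDK host-side auth code has something to connect to. Condensed, the steps amount to roughly the following; bash xtrace does not print redirection targets, so the configfs attribute names below are the standard nvmet ones and are assumed rather than read from the log (the model-string and allow_any_host writes are elided for brevity).

  # Condensed sketch of the kernel nvmet setup traced below (attribute names assumed)
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"              # expose the subsystem on the port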
00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:37.758 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:38.016 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:38.016 22:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:38.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.275 Waiting for block devices as requested 00:17:38.275 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:38.533 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:39.122 No valid GPT data, bailing 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:39.122 No valid GPT data, bailing 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:39.122 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:39.380 No valid GPT data, bailing 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:39.380 22:43:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:39.380 22:43:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:39.380 No valid GPT data, bailing 00:17:39.380 22:43:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:39.380 22:43:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:39.380 22:43:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:39.380 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:39.381 22:43:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -a 10.0.0.1 -t tcp -s 4420 00:17:39.381 00:17:39.381 Discovery Log Number of Records 2, Generation counter 2 00:17:39.381 =====Discovery Log Entry 0====== 00:17:39.381 trtype: tcp 00:17:39.381 adrfam: ipv4 00:17:39.381 subtype: current discovery subsystem 00:17:39.381 treq: not specified, sq flow control disable supported 00:17:39.381 portid: 1 00:17:39.381 trsvcid: 4420 00:17:39.381 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:39.381 traddr: 10.0.0.1 00:17:39.381 eflags: none 00:17:39.381 sectype: none 00:17:39.381 =====Discovery Log Entry 1====== 00:17:39.381 trtype: tcp 00:17:39.381 adrfam: ipv4 00:17:39.381 subtype: nvme subsystem 00:17:39.381 treq: not specified, sq flow control disable supported 00:17:39.381 portid: 1 00:17:39.381 trsvcid: 4420 00:17:39.381 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:39.381 traddr: 10.0.0.1 00:17:39.381 eflags: none 00:17:39.381 sectype: none 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.381 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.643 nvme0n1 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.643 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.644 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.908 nvme0n1 00:17:39.908 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.909 nvme0n1 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.909 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.168 22:43:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.168 nvme0n1 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:40.168 22:43:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.168 22:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.428 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.429 nvme0n1 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:40.429 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.688 nvme0n1 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.688 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.947 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.206 nvme0n1 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.206 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.207 22:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.207 nvme0n1 00:17:41.207 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.207 22:43:59 
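That per-key pattern sits inside two loops visible in the trace: host/auth.sh@101 iterates over the configured DH groups (ffdhe2048, then ffdhe3072, ffdhe4096 and ffdhe6144 later in this run) and host/auth.sh@102 iterates over key indexes 0-4, calling nvmet_auth_set_key followed by connect_authenticate for each combination. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion adds the controller-key flag only when a controller secret exists, which is why the keyid 4 attach omits --dhchap-ctrlr-key (unidirectional authentication). A rough sketch of the loop shape, with the digest fixed at sha256 as in this excerpt:

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # program the target-side secret(s)
          connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify and detach on the host side
      done
  done
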
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.207 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.207 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.207 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.207 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 nvme0n1 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 nvme0n1 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.985 nvme0n1 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:41.985 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.986 22:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
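Before every attach the script calls get_main_ns_ip (nvmf/common.sh@741-755 in the trace) to pick the address passed to -a: an associative array maps each transport to the name of the environment variable holding its address (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP), the name for the transport in use is selected and dereferenced, yielding 10.0.0.1 for this TCP run. The reconstruction below is only what the trace implies; the TEST_TRANSPORT variable name and the exact error handling are assumptions, not taken from this excerpt.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1     # the trace shows the literal transport "tcp" at this check
      ip=${ip_candidates[$TEST_TRANSPORT]}     # -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1              # make sure that variable is actually populated
      echo "${!ip}"                            # 10.0.0.1 in this run
  }
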
00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.552 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.811 nvme0n1 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:42.811 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.812 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.071 nvme0n1 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.071 22:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.400 nvme0n1 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.400 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.401 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.659 nvme0n1 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:43.659 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.660 22:44:01 
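Each iteration ends with the same verification and teardown: bdev_nvme_get_controllers is piped through jq -r '.[].name' and compared against the expected controller name (the backslashes in [[ nvme0 == \n\v\m\e\0 ]] make the right-hand side a literal string rather than a glob, so it is a plain equality check), the interleaved nvme0n1 tokens are RPC output naming the namespace bdev that appears once the authenticated attach succeeds, and bdev_nvme_detach_controller removes the controller before the next dhgroup/key combination. The xtrace_disable calls and repeated [[ 0 == 0 ]] comparisons come from the autotest_common.sh wrapper, which appears to silence tracing around each RPC and then confirm a zero return status. Condensed, the check is:

  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # handshake and attach succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0                                # clean slate for the next case
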
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.660 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.919 nvme0n1 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.919 22:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.833 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.090 nvme0n1 00:17:46.090 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.090 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.090 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.090 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.090 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.090 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.357 22:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.615 nvme0n1 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.615 
22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.615 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 nvme0n1 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.183 22:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.442 nvme0n1 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.442 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.443 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.701 22:44:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.701 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.960 nvme0n1 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.960 22:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 nvme0n1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.895 22:44:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.895 22:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.462 nvme0n1 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.462 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.029 nvme0n1 00:17:50.029 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.029 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.029 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.029 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.029 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.029 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.288 
22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.288 22:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.855 nvme0n1 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.855 
22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.855 22:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.421 nvme0n1 00:17:51.421 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.421 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.421 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.421 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.421 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.421 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 nvme0n1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.681 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.941 nvme0n1 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.941 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 nvme0n1 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 nvme0n1 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.201 22:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.201 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.461 nvme0n1 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
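[Editor's note] At this point the ffdhe2048 pass is complete and the host/auth.sh@101-102 markers show the outer loop advancing to ffdhe3072 while the inner loop restarts at keyid 0. A rough sketch of the loop shape those markers imply follows; the contents of the dhgroups array are an assumption taken only from what this excerpt shows (ffdhe2048 through ffdhe6144), and the surrounding digest loop that fixes sha384 is omitted.

  # Shape of the iteration implied by the auth.sh@101-104 trace markers (sketch only;
  # nvmet_auth_set_key and connect_authenticate are the test's own helpers).
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144")   # assumed; as seen in this excerpt
  for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
      for keyid in "${!keys[@]}"; do             # host/auth.sh@102, keys 0..4 in this run
          nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"     # host/auth.sh@103: program target side
          connect_authenticate "sha384" "$dhgroup" "$keyid"   # host/auth.sh@104: attach, check, detach
      done
  done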
00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.461 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.721 nvme0n1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
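[Editor's note] Each connect_authenticate call first resolves the address passed to -a; the nvmf/common.sh@741-755 markers show that lookup selecting NVMF_INITIATOR_IP for the tcp transport and echoing 10.0.0.1. Below is a small sketch of that helper's shape. The TEST_TRANSPORT variable name and the indirect expansion are assumptions inferred from the trace, not a verbatim copy of nvmf/common.sh.

  # Sketch of the address lookup traced at nvmf/common.sh@741-755 (names hedged as noted above).
  get_main_ns_ip() {
      local ip name
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # nvmf/common.sh@745
      # @747-748: pick the variable *name* for the active transport (tcp in this run),
      # @750-755: confirm it expands to an address and echo it (10.0.0.1 here).
      name=${ip_candidates[$TEST_TRANSPORT]:-}     # TEST_TRANSPORT: assumed variable name
      if [[ -n $name && -n ${!name:-} ]]; then
          ip=${!name}
          echo "$ip"
      fi
  }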
00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.721 nvme0n1 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.721 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.980 nvme0n1 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.980 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.238 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.239 nvme0n1 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.239 22:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.239 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.498 nvme0n1 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.498 22:44:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:53.498 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.499 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 nvme0n1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.757 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.016 nvme0n1 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.016 22:44:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.016 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.275 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.275 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.275 22:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.275 22:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.275 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.275 22:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.275 nvme0n1 00:17:54.275 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.275 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.275 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.275 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.275 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.275 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:54.533 22:44:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.533 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.534 nvme0n1 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.534 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.791 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.791 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 nvme0n1 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.051 22:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.323 nvme0n1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.323 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.920 nvme0n1 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.920 22:44:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.920 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.178 nvme0n1 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.178 22:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.179 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.179 22:44:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:56.179 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.437 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.697 nvme0n1 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
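[Note] The trace above keeps re-running the address-selection helper from nvmf/common.sh (get_main_ns_ip): it builds an ip_candidates table mapping "rdma" to NVMF_FIRST_TARGET_IP and "tcp" to NVMF_INITIATOR_IP, picks the entry for the transport under test, dereferences it, and echoes the result. The sketch below is a rough reconstruction of that logic from the trace only; the TEST_TRANSPORT variable name, the exact guard conditions, and the indirect expansion are assumptions, while the candidate table and the echoed 10.0.0.1 are taken directly from the log.

    # Reconstruction (assumed, not verbatim) of the helper the trace shows at
    # nvmf/common.sh@741-755. Only the candidate table and the echoed result
    # are grounded in the log above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side IP for RDMA runs
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator IP for TCP runs (this job)

        [[ -z $TEST_TRANSPORT ]] && return 1          # TEST_TRANSPORT assumed; trace shows "tcp"
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # name of the variable to dereference
        ip=${!ip}                                     # resolves to 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }

Because this job runs with the tcp transport, the helper resolves to NVMF_INITIATOR_IP (10.0.0.1), which is why every bdev_nvme_attach_controller call in this trace dials 10.0.0.1:4420.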
00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.697 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 nvme0n1 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
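[Note] For every (digest, dhgroup, keyid) combination the trace repeats the same two steps: nvmet_auth_set_key programs the DH-HMAC-CHAP secret on the kernel nvmet target side, then connect_authenticate restricts the SPDK initiator to that digest/dhgroup and performs an authenticated attach, check, and detach. The condensed sketch below is assembled from the rpc_cmd invocations visible in the log; the keys/ckeys arrays, the loop scaffolding, and the internals of nvmet_auth_set_key are assumptions for illustration, not the verbatim host/auth.sh source.

    # Condensed sketch of the per-key loop seen in the trace. The rpc_cmd
    # calls, NQNs, address and port are copied from the log; everything else
    # is assumed scaffolding.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # program the per-host DH-HMAC-CHAP secret (and controller
                # secret, when one exists) on the kernel nvmet target
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

                # restrict the initiator to the digest/dhgroup under test
                rpc_cmd bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

                # authenticated connect; the ckeyN argument pair is added only
                # when a bidirectional (controller) key exists for this keyid
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                    -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key$keyid" \
                    ${ckeys[keyid]:+--dhchap-ctrlr-key ckey$keyid}

                # verify the controller actually came up, then tear it down
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done

The "nvme0n1" lines interleaved in the trace are the namespace appearing each time the attach succeeds, and the [[ nvme0 == \n\v\m\e\0 ]] checks are the verification step above as bash xtrace prints it.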
00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.265 22:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.833 nvme0n1 00:17:57.833 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.833 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.833 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.833 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.833 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.833 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.834 22:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.401 nvme0n1 00:17:58.401 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.401 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.401 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.401 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.401 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.660 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.227 nvme0n1 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.227 22:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.227 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.228 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:59.228 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.228 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.164 nvme0n1 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:00.164 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.165 22:44:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.165 22:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.733 nvme0n1 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.733 nvme0n1 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.733 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 nvme0n1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.993 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.252 nvme0n1 00:18:01.252 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.252 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.253 22:44:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.253 22:44:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.253 22:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.512 nvme0n1 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.512 nvme0n1 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.512 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.771 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.772 nvme0n1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.772 
22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.772 22:44:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.772 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.031 nvme0n1 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
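Each pass of the loop traced above runs the same initiator-side sequence: bdev_nvme_set_options narrows the allowed DH-HMAC-CHAP digest and DH group to the combination under test, bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 presenting the host key for the current keyid (plus the controller key whenever one is defined), the new controller's name is read back with bdev_nvme_get_controllers, and bdev_nvme_detach_controller tears it down before the next combination. A minimal sketch of one iteration, assuming rpc_cmd is the test helper that forwards to the running target's scripts/rpc.py and that keys named key1/ckey1 were registered earlier by auth.sh:

# limit negotiation to the digest/DH group under test (sha512 + ffdhe3072 as an example)
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# attach to the kernel nvmet target; --dhchap-ctrlr-key requests bidirectional authentication
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
# the attach only succeeds if DH-HMAC-CHAP completed; confirm the controller exists, then clean up
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0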
00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.031 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.032 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.291 nvme0n1 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.291 22:44:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.291 22:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
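The DHHC-1 strings flowing through nvmet_auth_set_key are the standard NVMe in-band authentication secret representation (the same format nvme-cli's gen-dhchap-key emits): DHHC-1:<hash>:<base64>:. The middle field is not this test's keyid; it records how the secret was generated (00 = untransformed, while 01, 02, 03 indicate a secret sized for SHA-256, SHA-384, SHA-512, i.e. 32, 48 and 64 bytes), and the base64 payload carries the secret followed by a 4-byte CRC-32 trailer. A quick length check against the :02: secret used above is consistent with that layout (48 secret bytes plus the 4 CRC bytes):

# 72 base64 characters with '==' padding decode to 52 bytes: 48-byte secret + 4-byte CRC-32
echo 'NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==' | base64 -d | wc -c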
00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.291 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.549 nvme0n1 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.549 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.550 
22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.550 nvme0n1 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.550 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:18:02.808 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.809 nvme0n1 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.809 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.069 22:44:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.069 nvme0n1 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.069 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
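On the target side, the echo 'hmac(sha512)', echo ffdhe4096 and echo DHHC-1:... commands traced in each nvmet_auth_set_key call are writes into the kernel nvmet configfs entry for the allowed host, so the in-kernel target is told to expect exactly the digest, DH group and secret(s) the SPDK initiator is about to present. A rough equivalent of one such call, assuming the stock nvmet configfs layout and using placeholder secrets rather than the real ones:

# kernel nvmet target: per-host DH-HMAC-CHAP settings live under configfs
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)'        > "$host/dhchap_hash"      # digest under test
echo 'ffdhe4096'           > "$host/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:01:<secret>:' > "$host/dhchap_key"       # host secret for this keyid (placeholder)
echo 'DHHC-1:01:<secret>:' > "$host/dhchap_ctrl_key"  # controller secret, only when a ckey is defined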
00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.340 22:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.340 nvme0n1 00:18:03.340 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.340 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:18:03.340 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.340 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.340 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.599 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.857 nvme0n1 00:18:03.857 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.857 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.858 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.116 nvme0n1 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.116 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
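(Every attach in this section is followed by the same verification and teardown, traced at host/auth.sh@64-65. As a sketch, with jq and the nvme0 controller name taken directly from the trace and the comparison mirroring the [[ nvme0 == \n\v\m\e\0 ]] check:)

    # The connection only counts as authenticated if the controller actually shows up.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]

    # Tear the controller down so the next digest/dhgroup/keyid combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0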
00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.117 22:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.375 nvme0n1 00:18:04.375 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.375 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.375 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.375 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.375 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.375 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
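(The block traced repeatedly at nvmf/common.sh@741-755 is get_main_ns_ip selecting the initiator-facing address. Reconstructed from the xtrace output only; the TEST_TRANSPORT variable name and the early returns are inferred, not copied from the source file:)

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs use the first target IP
            ["tcp"]=NVMF_INITIATOR_IP       # TCP jobs (this run) use the initiator IP
        )
        # Look up the variable name for the active transport, then dereference it.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"    # 10.0.0.1 in this run
    }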
00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.634 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.635 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.893 nvme0n1 00:18:04.893 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.894 22:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.460 nvme0n1 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.460 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.717 nvme0n1 00:18:05.717 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.717 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.717 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.717 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.717 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:05.974 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.975 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.231 nvme0n1 00:18:06.231 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.231 22:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.231 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.231 22:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.231 22:44:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU4MmNhM2UzZjEzNTI4ZGI2ZjIwZDA4NjI3MmUzNDlFnfKK: 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: ]] 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDQ4MWY5MzA4NWY0NjIyNWY4MjU3NDY5OTJjNTJhYzZhMTBjOGRmMjkwNjRlYWY3ZjZkMzllZDQwN2M1ZjcxNBHODg0=: 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.231 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.489 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.057 nvme0n1 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.057 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.058 22:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.625 nvme0n1 00:18:07.625 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.625 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.625 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.625 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.625 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.625 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.885 22:44:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NWRkZjYwMzk2MWM0ZmRiY2Q5NjU3MjdkZWE3MWOVh/58: 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTBjODJmN2ZiYjdlMmM2Y2RhZmQ3YThhZjFjNGE0YmKF2yVy: 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.885 22:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.454 nvme0n1 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlhNDQ3OGQ0YWM3NTFmNWVjNjg2OTkxYzYyN2E4MmI3MWM1YmRmMDNlY2VlNTA1f9G3fg==: 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThkZjQ2OWFmNjZjMGRlYzNlYWY1MjZhNzAxMGFjYjUT3IGG: 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:08.454 22:44:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.454 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.021 nvme0n1 00:18:09.021 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.021 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.021 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.021 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.021 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.021 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.280 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.280 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.280 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1YmNjZDQzMDFlY2MwNmIzMGM1NTg3ODMyNWZhYzE3OWY4MzQ4MjQ0OGE5ZTdkYzE1NGI3NDM2MjkxMjNmNugF5Bc=: 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:09.281 22:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.848 nvme0n1 00:18:09.848 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.848 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBmZjdkMGFkN2QxMjhlMjgzOTdjODlmYjIyZWY0MmMzOTcyYjhhYTdhZTM3MmVmE3DRnQ==: 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM1ZmQwYTU5MDcyOTY2YjVkMGNkMTIzZmE1YmZiYjFjYWQ4NzNjNjQ1ZmRhNjQxrM53fA==: 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.849 
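The two blocks above complete connect_authenticate for keys 3 and 4: nvmet_auth_set_key installs the DHHC-1 secret and the hmac(sha512)/ffdhe8192 selection on the kernel nvmet target, bdev_nvme_set_options restricts the host to the same digest and DH group, bdev_nvme_attach_controller connects with --dhchap-key (and, for key 3, --dhchap-ctrlr-key for bidirectional authentication), and success is confirmed when bdev_nvme_get_controllers reports nvme0 before the controller is detached again. The sha256/ffdhe2048 key re-installed just above is the setup for the failure cases that follow. rpc_cmd is the harness wrapper around scripts/rpc.py, so a rough standalone equivalent of the host side (a sketch only, run from the SPDK repo root and assuming key3/ckey3 are already provisioned) is:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
  scripts/rpc.py bdev_nvme_detach_controller nvme0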
22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.849 request: 00:18:09.849 { 00:18:09.849 "name": "nvme0", 00:18:09.849 "trtype": "tcp", 00:18:09.849 "traddr": "10.0.0.1", 00:18:09.849 "adrfam": "ipv4", 00:18:09.849 "trsvcid": "4420", 00:18:09.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:09.849 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:09.849 "prchk_reftag": false, 00:18:09.849 "prchk_guard": false, 00:18:09.849 "hdgst": false, 00:18:09.849 "ddgst": false, 00:18:09.849 "method": "bdev_nvme_attach_controller", 00:18:09.849 "req_id": 1 00:18:09.849 } 00:18:09.849 Got JSON-RPC error response 00:18:09.849 response: 00:18:09.849 { 00:18:09.849 "code": -5, 00:18:09.849 "message": "Input/output error" 00:18:09.849 } 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:09.849 22:44:27 nvmf_tcp.nvmf_auth_host -- 
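This request/response pair is the first of three negative checks: bdev_nvme_attach_controller is issued against the same DH-HMAC-CHAP-protected subsystem without any --dhchap-key, the connection-level authentication cannot complete, and the RPC comes back with code -5 (Input/output error); the NOT helper turns that expected failure into a passing assertion. The next two blocks repeat the pattern with the wrong key (key2) and with a mismatched controller key (key1 plus ckey2). Outside the harness the same assertion could be sketched as a hypothetical wrapper around the exact RPC shown in the trace:

  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: attach without a DH-HMAC-CHAP key succeeded" >&2
      exit 1
  fi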
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.108 request: 00:18:10.108 { 00:18:10.108 "name": "nvme0", 00:18:10.108 "trtype": "tcp", 00:18:10.108 "traddr": "10.0.0.1", 00:18:10.108 "adrfam": "ipv4", 00:18:10.108 "trsvcid": "4420", 00:18:10.108 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:10.108 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:10.108 "prchk_reftag": false, 00:18:10.108 "prchk_guard": false, 00:18:10.108 "hdgst": false, 00:18:10.108 "ddgst": false, 00:18:10.108 "dhchap_key": "key2", 00:18:10.108 "method": "bdev_nvme_attach_controller", 00:18:10.108 "req_id": 1 00:18:10.108 } 00:18:10.108 Got JSON-RPC error response 00:18:10.108 response: 00:18:10.108 { 00:18:10.108 "code": -5, 00:18:10.108 "message": "Input/output error" 00:18:10.108 } 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:10.108 22:44:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.108 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.108 request: 00:18:10.108 { 00:18:10.108 "name": "nvme0", 00:18:10.108 "trtype": "tcp", 00:18:10.108 "traddr": "10.0.0.1", 00:18:10.108 "adrfam": "ipv4", 
00:18:10.108 "trsvcid": "4420", 00:18:10.108 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:10.108 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:10.108 "prchk_reftag": false, 00:18:10.108 "prchk_guard": false, 00:18:10.108 "hdgst": false, 00:18:10.108 "ddgst": false, 00:18:10.108 "dhchap_key": "key1", 00:18:10.108 "dhchap_ctrlr_key": "ckey2", 00:18:10.108 "method": "bdev_nvme_attach_controller", 00:18:10.108 "req_id": 1 00:18:10.108 } 00:18:10.108 Got JSON-RPC error response 00:18:10.109 response: 00:18:10.109 { 00:18:10.109 "code": -5, 00:18:10.109 "message": "Input/output error" 00:18:10.109 } 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.109 rmmod nvme_tcp 00:18:10.109 rmmod nvme_fabrics 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78806 ']' 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78806 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78806 ']' 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78806 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78806 00:18:10.109 killing process with pid 78806 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78806' 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78806 00:18:10.109 22:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78806 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:10.368 
22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:10.368 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:10.627 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:10.627 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:10.627 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:10.627 22:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:11.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:11.194 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:11.453 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:11.453 22:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oLr /tmp/spdk.key-null.4fr /tmp/spdk.key-sha256.wdG /tmp/spdk.key-sha384.M9U /tmp/spdk.key-sha512.ah2 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:11.453 22:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:11.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:11.712 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:11.712 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:11.712 00:18:11.712 real 0m36.583s 00:18:11.712 user 0m32.737s 00:18:11.712 sys 0m4.020s 00:18:11.712 22:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.712 22:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.712 
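cleanup unwinds the kernel target through configfs in roughly the reverse order of its creation: the host is unlinked from the subsystem's allowed_hosts, the host entry is removed, the namespace is disabled and deleted, the port-to-subsystem link and the port directory go away, the subsystem directory is removed last, and nvmet_tcp/nvmet are unloaded before setup.sh rebinds the NVMe devices. Condensed from the trace above (the redirect target of the bare 'echo 0' is not shown, so that line is an assumption):

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > .../namespaces/1/enable    # assumed target of the 'echo 0' above
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet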
************************************ 00:18:11.712 END TEST nvmf_auth_host 00:18:11.712 ************************************ 00:18:11.974 22:44:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.974 22:44:29 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:18:11.974 22:44:29 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:11.974 22:44:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.974 22:44:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.974 22:44:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.974 ************************************ 00:18:11.974 START TEST nvmf_digest 00:18:11.974 ************************************ 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:11.974 * Looking for test storage... 00:18:11.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.974 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:11.975 Cannot find device "nvmf_tgt_br" 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.975 Cannot find device "nvmf_tgt_br2" 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:11.975 Cannot find device "nvmf_tgt_br" 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:18:11.975 22:44:29 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:11.975 Cannot find device "nvmf_tgt_br2" 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:18:11.975 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:12.233 22:44:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:12.233 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:12.493 22:44:30 
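With NET_TYPE=virt, nvmftestinit builds a purely virtual fabric instead of touching physical NICs: leftover interfaces are flushed first (hence the "Cannot find device" and "Cannot open network namespace" messages above), a fresh nvmf_tgt_ns_spdk namespace is created, three veth pairs are added, the target ends are moved into the namespace and addressed as 10.0.0.2/24 and 10.0.0.3/24 while the initiator keeps nvmf_init_if at 10.0.0.1/24, and the bridge-side peers are enslaved to nvmf_br. The INPUT/FORWARD iptables rules and the three pings below then confirm that both target addresses, the initiator address, and TCP port 4420 are reachable across the bridge. Stripped of the harness tracing, the topology amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # plus the 'ip link set ... up' calls on each interface, the bridge, and lo inside the namespace, as traced above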
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:12.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:18:12.493 00:18:12.493 --- 10.0.0.2 ping statistics --- 00:18:12.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.493 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:12.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:12.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:18:12.493 00:18:12.493 --- 10.0.0.3 ping statistics --- 00:18:12.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.493 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:12.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:12.493 00:18:12.493 --- 10.0.0.1 ping statistics --- 00:18:12.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.493 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:12.493 ************************************ 00:18:12.493 START TEST nvmf_digest_clean 00:18:12.493 ************************************ 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:12.493 22:44:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80386 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80386 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80386 ']' 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.493 22:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:12.493 [2024-07-15 22:44:30.179493] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:12.493 [2024-07-15 22:44:30.179587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.493 [2024-07-15 22:44:30.321200] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.752 [2024-07-15 22:44:30.469687] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.752 [2024-07-15 22:44:30.469775] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.752 [2024-07-15 22:44:30.469789] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.752 [2024-07-15 22:44:30.469798] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.752 [2024-07-15 22:44:30.469805] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:12.752 [2024-07-15 22:44:30.469842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:13.689 [2024-07-15 22:44:31.294417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.689 null0 00:18:13.689 [2024-07-15 22:44:31.358140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.689 [2024-07-15 22:44:31.382338] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80418 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80418 /var/tmp/bperf.sock 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80418 ']' 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.689 22:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:13.689 [2024-07-15 22:44:31.444185] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:13.689 [2024-07-15 22:44:31.444307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80418 ] 00:18:13.948 [2024-07-15 22:44:31.585479] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.948 [2024-07-15 22:44:31.745521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.883 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.883 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:14.883 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:14.883 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:14.883 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:15.140 [2024-07-15 22:44:32.740746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:15.140 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.140 22:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.398 nvme0n1 00:18:15.398 22:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:15.398 22:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:15.656 Running I/O for 2 seconds... 
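run_bperf drives the I/O generator entirely over RPC: bdevperf is launched with --wait-for-rpc on its own socket (/var/tmp/bperf.sock), framework_start_init finishes its initialization, bdev_nvme_attach_controller with --ddgst attaches the TCP controller with the data digest enabled (which is what makes every 4096-byte read exercise crc32c), and bdevperf.py perform_tests starts the 2-second randread run whose results follow. The sequence as issued in the trace (backgrounding of bdevperf shown here for illustration; the harness manages that itself):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests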
00:18:17.553 00:18:17.553 Latency(us) 00:18:17.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:17.553 nvme0n1 : 2.00 14710.15 57.46 0.00 0.00 8694.33 2234.18 20971.52 00:18:17.553 =================================================================================================================== 00:18:17.553 Total : 14710.15 57.46 0.00 0.00 8694.33 2234.18 20971.52 00:18:17.553 0 00:18:17.553 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:17.553 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:17.553 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:17.553 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:17.553 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:17.553 | select(.opcode=="crc32c") 00:18:17.553 | "\(.module_name) \(.executed)"' 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80418 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80418 ']' 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80418 00:18:17.811 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:18.069 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.069 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80418 00:18:18.069 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:18.070 killing process with pid 80418 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80418' 00:18:18.070 Received shutdown signal, test time was about 2.000000 seconds 00:18:18.070 00:18:18.070 Latency(us) 00:18:18.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.070 =================================================================================================================== 00:18:18.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80418 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80418 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
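After the run the harness verifies which accel module actually computed the CRC32C digests: accel_get_stats is queried over the same bperf socket and jq extracts the crc32c operation's module name and executed count. Since this pass runs with scan_dsa=false, the expected module is software, which is what the [[ software == software ]] comparison below asserts together with a non-zero executed count. The query as issued in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'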
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80484 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80484 /var/tmp/bperf.sock 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80484 ']' 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.070 22:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:18.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:18.326 Zero copy mechanism will not be used. 00:18:18.326 [2024-07-15 22:44:35.928287] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:18:18.326 [2024-07-15 22:44:35.928364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80484 ] 00:18:18.326 [2024-07-15 22:44:36.061070] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.590 [2024-07-15 22:44:36.167350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.152 22:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.152 22:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:19.152 22:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:19.152 22:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:19.152 22:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:19.408 [2024-07-15 22:44:37.152428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:19.408 22:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:19.408 22:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:20.011 nvme0n1 00:18:20.011 22:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:20.011 22:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:20.011 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:20.011 Zero copy mechanism will not be used. 00:18:20.011 Running I/O for 2 seconds... 
00:18:21.911 00:18:21.911 Latency(us) 00:18:21.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.911 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:21.911 nvme0n1 : 2.00 6633.23 829.15 0.00 0.00 2408.56 2308.65 6047.19 00:18:21.911 =================================================================================================================== 00:18:21.911 Total : 6633.23 829.15 0.00 0.00 2408.56 2308.65 6047.19 00:18:21.911 0 00:18:21.911 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:21.911 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:21.911 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:21.911 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:21.911 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:21.911 | select(.opcode=="crc32c") 00:18:21.911 | "\(.module_name) \(.executed)"' 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80484 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80484 ']' 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80484 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80484 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.169 killing process with pid 80484 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80484' 00:18:22.169 Received shutdown signal, test time was about 2.000000 seconds 00:18:22.169 00:18:22.169 Latency(us) 00:18:22.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.169 =================================================================================================================== 00:18:22.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80484 00:18:22.169 22:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80484 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80543 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80543 /var/tmp/bperf.sock 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80543 ']' 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.426 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:22.427 22:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:22.427 [2024-07-15 22:44:40.252065] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
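Between runs the script verifies that the digests really went through the accel framework: host/digest.sh@93..96 above read accel_get_stats from the bperf socket, filter the crc32c entry with jq, and assert that something was executed by the expected module (software here, since scan_dsa=false). Condensed, that check is roughly:

# Sketch of the crc32c accounting check from host/digest.sh@93..96 above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

read -r acc_module acc_executed < <($RPC accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

(( acc_executed > 0 ))          # some crc32c work was actually performed
[[ $acc_module == software ]]   # scan_dsa=false, so the software module is expected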
00:18:22.427 [2024-07-15 22:44:40.252158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80543 ] 00:18:22.683 [2024-07-15 22:44:40.391501] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.683 [2024-07-15 22:44:40.501339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.615 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.615 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:23.615 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:23.615 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:23.615 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:23.884 [2024-07-15 22:44:41.582435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:23.884 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:23.884 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:24.156 nvme0n1 00:18:24.156 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:24.156 22:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:24.414 Running I/O for 2 seconds... 
00:18:26.315 00:18:26.315 Latency(us) 00:18:26.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.315 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.315 nvme0n1 : 2.01 15483.82 60.48 0.00 0.00 8257.39 2621.44 18350.08 00:18:26.315 =================================================================================================================== 00:18:26.315 Total : 15483.82 60.48 0.00 0.00 8257.39 2621.44 18350.08 00:18:26.315 0 00:18:26.315 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:26.315 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:26.315 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:26.315 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:26.315 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:26.315 | select(.opcode=="crc32c") 00:18:26.315 | "\(.module_name) \(.executed)"' 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80543 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80543 ']' 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80543 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80543 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:26.881 killing process with pid 80543 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80543' 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80543 00:18:26.881 Received shutdown signal, test time was about 2.000000 seconds 00:18:26.881 00:18:26.881 Latency(us) 00:18:26.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.881 =================================================================================================================== 00:18:26.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80543 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80599 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80599 /var/tmp/bperf.sock 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80599 ']' 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.881 22:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:27.140 [2024-07-15 22:44:44.740755] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:27.140 [2024-07-15 22:44:44.740895] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80599 ] 00:18:27.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:27.140 Zero copy mechanism will not be used. 
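killprocess, traced at autotest_common.sh@948..972 above, is the teardown used after every bperf run. Stripped of the test framework's xtrace plumbing it amounts to roughly the following simplified sketch (the real helper treats sudo-wrapped processes specially; that branch is only hinted at here):

# Simplified sketch of killprocess (autotest_common.sh@948..972 above).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                   # @948: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0      # @952: already gone, nothing to do
    if [ "$(uname)" = Linux ]; then             # @953
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")    # @954: e.g. reactor_1 above
        # @958: the real helper handles process_name == sudo separately (not shown)
    fi
    echo "killing process with pid $pid"        # @966
    kill "$pid"                                 # @967
    wait "$pid"                                 # @972: reap it and propagate its status
}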
00:18:27.140 [2024-07-15 22:44:44.880900] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.397 [2024-07-15 22:44:44.998468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.964 22:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.964 22:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:27.964 22:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:27.964 22:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:27.964 22:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:28.223 [2024-07-15 22:44:45.976618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:28.223 22:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.223 22:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.789 nvme0n1 00:18:28.789 22:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:28.789 22:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:28.789 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:28.789 Zero copy mechanism will not be used. 00:18:28.789 Running I/O for 2 seconds... 
00:18:30.692 00:18:30.692 Latency(us) 00:18:30.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.692 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:30.692 nvme0n1 : 2.00 5943.22 742.90 0.00 0.00 2684.60 2383.13 6553.60 00:18:30.692 =================================================================================================================== 00:18:30.692 Total : 5943.22 742.90 0.00 0.00 2684.60 2383.13 6553.60 00:18:30.692 0 00:18:30.692 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:30.692 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:30.692 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:30.692 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:30.692 | select(.opcode=="crc32c") 00:18:30.692 | "\(.module_name) \(.executed)"' 00:18:30.692 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80599 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80599 ']' 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80599 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80599 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.951 killing process with pid 80599 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80599' 00:18:30.951 Received shutdown signal, test time was about 2.000000 seconds 00:18:30.951 00:18:30.951 Latency(us) 00:18:30.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.951 =================================================================================================================== 00:18:30.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80599 00:18:30.951 22:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80599 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80386 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80386 ']' 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80386 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80386 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:31.239 killing process with pid 80386 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80386' 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80386 00:18:31.239 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80386 00:18:31.497 00:18:31.497 real 0m19.160s 00:18:31.497 user 0m36.768s 00:18:31.497 sys 0m5.254s 00:18:31.497 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.497 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:31.497 ************************************ 00:18:31.497 END TEST nvmf_digest_clean 00:18:31.497 ************************************ 00:18:31.497 22:44:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:31.497 22:44:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:31.497 22:44:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:31.497 22:44:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.498 22:44:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:31.757 ************************************ 00:18:31.757 START TEST nvmf_digest_error 00:18:31.757 ************************************ 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80688 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80688 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80688 ']' 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.757 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.757 22:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.757 [2024-07-15 22:44:49.395611] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:31.757 [2024-07-15 22:44:49.395713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.757 [2024-07-15 22:44:49.533192] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.015 [2024-07-15 22:44:49.651709] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.015 [2024-07-15 22:44:49.651766] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.015 [2024-07-15 22:44:49.651777] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.015 [2024-07-15 22:44:49.651793] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.015 [2024-07-15 22:44:49.651800] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.015 [2024-07-15 22:44:49.651832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.582 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.582 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:32.582 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.582 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.582 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.582 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.840 [2024-07-15 22:44:50.420383] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@10 -- # set +x 00:18:32.840 [2024-07-15 22:44:50.482028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:32.840 null0 00:18:32.840 [2024-07-15 22:44:50.532581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.840 [2024-07-15 22:44:50.556704] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80720 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80720 /var/tmp/bperf.sock 00:18:32.840 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80720 ']' 00:18:32.841 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:32.841 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.841 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:32.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:32.841 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.841 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.841 [2024-07-15 22:44:50.611009] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
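For the error-path tests the nvmf target itself is started with --wait-for-rpc so that, before the framework initializes, crc32c can be re-routed to the accel "error" module (host/digest.sh@104 above, confirmed by the accel_rpc.c notice). A hedged sketch of that target-side setup; the batched config at host/digest.sh@43 (null0 bdev, TCP transport, listener on 10.0.0.2:4420) is summarized rather than reproduced:

# Sketch of the digest-error target setup (host/digest.sh@102..105 above).
# rpc.py talks to the nvmf target on its default socket, /var/tmp/spdk.sock.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC accel_assign_opc -o crc32c -m error   # route crc32c through the error-injection module
$RPC framework_start_init                  # assumed here; performed inside the batched rpc_cmd above
# ...followed by the target config visible in the log: a null0 bdev, the TCP transport,
# and a listener on 10.0.0.2 port 4420 for nqn.2016-06.io.spdk:cnode1.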
00:18:32.841 [2024-07-15 22:44:50.611080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80720 ] 00:18:33.099 [2024-07-15 22:44:50.748562] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.099 [2024-07-15 22:44:50.864709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.099 [2024-07-15 22:44:50.918170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:33.357 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.357 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:33.357 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:33.357 22:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:33.615 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:33.615 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.615 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.615 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.615 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.615 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.874 nvme0n1 00:18:33.874 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:33.874 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.874 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.874 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.874 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:33.874 22:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:33.874 Running I/O for 2 seconds... 
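The run above (run_bperf_err at host/digest.sh@108, pid 80720) produces the wall of digest errors that follows: the target's crc32c results are corrupted by the error module while bdevperf reads with --ddgst, so the initiator's data digest check fails and each request completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The RPC sequence, pulled out of the trace at host/digest.sh@61..69:

# The injection sequence traced above (host/digest.sh@61..69); rpc_cmd goes to the
# nvmf target's default socket, bperf_rpc to bdevperf's /var/tmp/bperf.sock.
TGT_RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$TGT_RPC accel_error_inject_error -o crc32c -t disable            # start from a clean slate
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256     # arguments exactly as traced above
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests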
00:18:34.132 [2024-07-15 22:44:51.717911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.717968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.132 [2024-07-15 22:44:51.717984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.132 [2024-07-15 22:44:51.734769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.734807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.132 [2024-07-15 22:44:51.734820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.132 [2024-07-15 22:44:51.751605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.751643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.132 [2024-07-15 22:44:51.751657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.132 [2024-07-15 22:44:51.768405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.768458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.132 [2024-07-15 22:44:51.768471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.132 [2024-07-15 22:44:51.785103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.785156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.132 [2024-07-15 22:44:51.785169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.132 [2024-07-15 22:44:51.801764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.801816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.132 [2024-07-15 22:44:51.801829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.132 [2024-07-15 22:44:51.818393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.132 [2024-07-15 22:44:51.818429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.835454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.835510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.835524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.852283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.852322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.869147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.869190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.869203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.886056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.886099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.886112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.902938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.902981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.902994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.919797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.919847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.919861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.936708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.936751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.936764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.133 [2024-07-15 22:44:51.954387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.133 [2024-07-15 22:44:51.954442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.133 [2024-07-15 22:44:51.954457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:51.971357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.391 [2024-07-15 22:44:51.971398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.391 [2024-07-15 22:44:51.971412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:51.988380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.391 [2024-07-15 22:44:51.988420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.391 [2024-07-15 22:44:51.988434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:52.005306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.391 [2024-07-15 22:44:52.005348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.391 [2024-07-15 22:44:52.005362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:52.022211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.391 [2024-07-15 22:44:52.022249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.391 [2024-07-15 22:44:52.022262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:52.039128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.391 [2024-07-15 22:44:52.039169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.391 [2024-07-15 22:44:52.039182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:52.056143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.391 [2024-07-15 22:44:52.056200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.391 [2024-07-15 22:44:52.056214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.391 [2024-07-15 22:44:52.073179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.073224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.073238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.090254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.090301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.090315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.107298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.107353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.107367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.124339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.124377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.124390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.141324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.141378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.141391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.158310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.158346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.158360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.175283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.175319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 
[2024-07-15 22:44:52.175332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.192171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.192208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.192221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.392 [2024-07-15 22:44:52.209013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.392 [2024-07-15 22:44:52.209049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.392 [2024-07-15 22:44:52.209061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.225796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.225833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.225845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.242649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.242687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.242700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.259467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.259504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.259517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.276314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.276350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.276364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.293577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.293623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9149 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.293638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.310531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.310571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.310585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.327389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.327426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.327438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.344279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.344319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.344332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.361106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.361143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.361157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.378052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.378105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.378118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.395053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.395098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.395111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.412085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.412138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.412152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.429101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.429139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.429152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.446235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.446271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.446284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.463059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.463111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.463125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.651 [2024-07-15 22:44:52.479938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.651 [2024-07-15 22:44:52.479974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.651 [2024-07-15 22:44:52.479987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.910 [2024-07-15 22:44:52.496742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.910 [2024-07-15 22:44:52.496796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.910 [2024-07-15 22:44:52.496808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.910 [2024-07-15 22:44:52.513606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.910 [2024-07-15 22:44:52.513658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.910 [2024-07-15 22:44:52.513672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.910 [2024-07-15 22:44:52.530492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.910 [2024-07-15 22:44:52.530528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.910 [2024-07-15 22:44:52.530540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.910 [2024-07-15 22:44:52.547587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.910 [2024-07-15 22:44:52.547625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.910 [2024-07-15 22:44:52.547639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.910 [2024-07-15 22:44:52.564394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.910 [2024-07-15 22:44:52.564430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.910 [2024-07-15 22:44:52.564443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.910 [2024-07-15 22:44:52.581167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.910 [2024-07-15 22:44:52.581204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.581216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.598059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.598096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.598109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.614939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.614991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.615005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.631844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.631906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.631919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.648733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 
00:18:34.911 [2024-07-15 22:44:52.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.648800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.665515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.665567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.665580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.682442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.682477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.682490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.699380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.699432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.699445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.716172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.716223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.716236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.911 [2024-07-15 22:44:52.733011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:34.911 [2024-07-15 22:44:52.733047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.911 [2024-07-15 22:44:52.733059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.749983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.750019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.750031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.766804] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.766839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.766852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.791083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.791123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.791136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.807853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.807898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.807912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.824690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.824727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.824739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.841527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.841563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.841576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.858368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.858405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.858418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.875190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.875226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.875239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.892058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.892110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.892122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.909170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.909207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.909219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.926309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.926345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.926357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.943702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.943739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.943752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.961016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.961060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.961073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.978305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.978341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.978354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.170 [2024-07-15 22:44:52.995527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.170 [2024-07-15 22:44:52.995580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.170 [2024-07-15 22:44:52.995594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.429 [2024-07-15 22:44:53.012829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.429 [2024-07-15 22:44:53.012876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.429 [2024-07-15 22:44:53.012890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.429 [2024-07-15 22:44:53.029913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.429 [2024-07-15 22:44:53.029948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.429 [2024-07-15 22:44:53.029961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.429 [2024-07-15 22:44:53.046947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.429 [2024-07-15 22:44:53.046984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.429 [2024-07-15 22:44:53.046997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.429 [2024-07-15 22:44:53.063927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.429 [2024-07-15 22:44:53.063965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.429 [2024-07-15 22:44:53.063978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.429 [2024-07-15 22:44:53.080985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.429 [2024-07-15 22:44:53.081022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.429 [2024-07-15 22:44:53.081035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.429 [2024-07-15 22:44:53.097924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.429 [2024-07-15 22:44:53.097959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.097972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.114863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.114915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.114929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.131700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.131740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.131753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.149426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.149466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.149479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.166521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.166562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.166576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.183629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.183669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.183683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.200662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.200702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.200715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.217837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.217889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.234751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.234793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 
[2024-07-15 22:44:53.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.430 [2024-07-15 22:44:53.252348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.430 [2024-07-15 22:44:53.252431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.430 [2024-07-15 22:44:53.252443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.269442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.269489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.269503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.286625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.286702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.286716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.304378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.304456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.304471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.322037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.322095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.322110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.339611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.339660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.339674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.357270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.357369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6530 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.357398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.374862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.374922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.374938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.392310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.392375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.392389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.409747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.688 [2024-07-15 22:44:53.409801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-07-15 22:44:53.409815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.688 [2024-07-15 22:44:53.426994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.689 [2024-07-15 22:44:53.427043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.689 [2024-07-15 22:44:53.427057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.689 [2024-07-15 22:44:53.444105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.689 [2024-07-15 22:44:53.444167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.689 [2024-07-15 22:44:53.444180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.689 [2024-07-15 22:44:53.461216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.689 [2024-07-15 22:44:53.461264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.689 [2024-07-15 22:44:53.461277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.689 [2024-07-15 22:44:53.477783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.689 [2024-07-15 22:44:53.477827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.689 [2024-07-15 22:44:53.477840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.689 [2024-07-15 22:44:53.494161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.689 [2024-07-15 22:44:53.494229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.689 [2024-07-15 22:44:53.494242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.689 [2024-07-15 22:44:53.510627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.689 [2024-07-15 22:44:53.510668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.689 [2024-07-15 22:44:53.510681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.527866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.527927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.527941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.544977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.545025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.545039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.561616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.561660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.561674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.578116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.578160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.578173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.594813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.594874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.594912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.611526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.611584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.611597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.628594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.628655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.628668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.645232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.645292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.645305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.661500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.661561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.661574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.677717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.677792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.677804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 [2024-07-15 22:44:53.693523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1fe10) 00:18:35.948 [2024-07-15 22:44:53.693582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.948 [2024-07-15 22:44:53.693594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.948 00:18:35.949 Latency(us) 00:18:35.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.949 Job: nvme0n1 (Core Mask 0x2, 
workload: randread, depth: 128, IO size: 4096)
00:18:35.949 nvme0n1 : 2.00 14903.01 58.21 0.00 0.00 8582.42 7417.48 32648.84
00:18:35.949 ===================================================================================================================
00:18:35.949 Total : 14903.01 58.21 0.00 0.00 8582.42 7417.48 32648.84
00:18:35.949 0
00:18:35.949 22:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:35.949 22:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:35.949 | .driver_specific
00:18:35.949 | .nvme_error
00:18:35.949 | .status_code
00:18:35.949 | .command_transient_transport_error'
00:18:35.949 22:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:35.949 22:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 ))
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80720
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80720 ']'
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80720
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80720
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:36.207 killing process with pid 80720
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80720'
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80720
00:18:36.207 Received shutdown signal, test time was about 2.000000 seconds
00:18:36.207
00:18:36.207 Latency(us)
00:18:36.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:36.207 ===================================================================================================================
00:18:36.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80720
00:18:36.207 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80773
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80773 /var/tmp/bperf.sock
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80773 ']'
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:36.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:36.466 22:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:36.725 [2024-07-15 22:44:54.319607] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization...
00:18:36.725 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:36.725 Zero copy mechanism will not be used.
00:18:36.725 [2024-07-15 22:44:54.319724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80773 ]
00:18:36.984 [2024-07-15 22:44:54.450482] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:36.984 [2024-07-15 22:44:54.568072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:36.984 [2024-07-15 22:44:54.621874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:37.550 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:37.550 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:18:37.550 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:37.550 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:37.809 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:37.809 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:37.809 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:37.809 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:37.809 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:37.809 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
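Condensed, the commands traced above drive this error case end to end. The sketch below is a reading aid only, restating the same digest.sh helpers and arguments that appear in the xtrace output (bperf_rpc wraps scripts/rpc.py against /var/tmp/bperf.sock as traced at host/digest.sh@18; the jq filter is the one traced at host/digest.sh@28); it is not an excerpt of the script itself:

    # attach the NVMe/TCP controller with data digest checking enabled (--ddgst), as at host/digest.sh@64
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # inject crc32c corruption via the accel error RPC, as at host/digest.sh@67; the resulting digest
    # mismatches surface as the data digest errors / COMMAND TRANSIENT TRANSPORT ERROR completions below
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the timed workload, then read back the transient transport error counter from bdev_get_iostat
    bperf_py perform_tests
    bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'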
00:18:38.133 nvme0n1 00:18:38.133 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:38.133 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.133 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:38.133 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.133 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:38.133 22:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:38.424 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:38.424 Zero copy mechanism will not be used. 00:18:38.424 Running I/O for 2 seconds... 00:18:38.424 [2024-07-15 22:44:56.038957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.039420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.039532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.043735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.043855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.043973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.048273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.048390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.048472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.052641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.052770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.057083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.057199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.057281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:38.424 [2024-07-15 22:44:56.061421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.061538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.061624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.065922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.066042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.066122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.070272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.070391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.070466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.074520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.074638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.074717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.079049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.079168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.079254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.083312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.083429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.083511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.087653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.087773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.087876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.092129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.092246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.092328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.096511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.424 [2024-07-15 22:44:56.096625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.424 [2024-07-15 22:44:56.096711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.424 [2024-07-15 22:44:56.100964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.101092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.101171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.105601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.105717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.105799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.110261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.110379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.110469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.115101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.115251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.115368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.119878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.120011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.120099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.124436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.124558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.124662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.129143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.129262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.129358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.133921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.134085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.134210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.138755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.138943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.138967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.143199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.143240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.143253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.147623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.147662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.147675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.151983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.152021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:38.425 [2024-07-15 22:44:56.152034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.156407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.156445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.156458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.160712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.160751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.160764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.165137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.165175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.165187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.169297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.169333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.169346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.173425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.173463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.173477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.177808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.177846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.177860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.182353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.182393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.182406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.186592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.186629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.186641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.190953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.190990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.191003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.195406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.195441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.195454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.199667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.199704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.199717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.203841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.203891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.203921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.208079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.208114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.208126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.212239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.212275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.212288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.216391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.216442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.216455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.220714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.220749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.220762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.225014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.225051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.425 [2024-07-15 22:44:56.225063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.425 [2024-07-15 22:44:56.229335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.425 [2024-07-15 22:44:56.229371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.229384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.426 [2024-07-15 22:44:56.233625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.426 [2024-07-15 22:44:56.233676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.233707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.426 [2024-07-15 22:44:56.238098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.426 [2024-07-15 22:44:56.238135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.238148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.426 [2024-07-15 22:44:56.242423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 
00:18:38.426 [2024-07-15 22:44:56.242461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.242475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.426 [2024-07-15 22:44:56.246739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.426 [2024-07-15 22:44:56.246779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.246792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.426 [2024-07-15 22:44:56.251100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.426 [2024-07-15 22:44:56.251139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.251151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.426 [2024-07-15 22:44:56.255441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.426 [2024-07-15 22:44:56.255480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.426 [2024-07-15 22:44:56.255494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.685 [2024-07-15 22:44:56.259797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.685 [2024-07-15 22:44:56.259837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.685 [2024-07-15 22:44:56.259850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.685 [2024-07-15 22:44:56.264204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.685 [2024-07-15 22:44:56.264272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.685 [2024-07-15 22:44:56.264285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.685 [2024-07-15 22:44:56.268646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.685 [2024-07-15 22:44:56.268685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.685 [2024-07-15 22:44:56.268698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.685 [2024-07-15 22:44:56.273028] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.685 [2024-07-15 22:44:56.273065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.685 [2024-07-15 22:44:56.273078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.685 [2024-07-15 22:44:56.277450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.685 [2024-07-15 22:44:56.277487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.685 [2024-07-15 22:44:56.277500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.281939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.281976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.281990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.286426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.286466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.286480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.290809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.290846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.290859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.294857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.294904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.294917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.298851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.298896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.298909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.303053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.303089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.303103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.307146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.307182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.307195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.311431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.311468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.311481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.315938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.315974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.315988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.320319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.320355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.320384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.324794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.324831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.324844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.329109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.329143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.329155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.333429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.333467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.333480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.337710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.337747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.337760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.342074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.342109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.342121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.346393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.346430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.346444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.350835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.350889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.350921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.355159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.355195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.355208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.360141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.360197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.360220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.365326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.365368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.686 [2024-07-15 22:44:56.365382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.686 [2024-07-15 22:44:56.370043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.686 [2024-07-15 22:44:56.370086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.370101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.374441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.374498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.374511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.379068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.379109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.379124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.383563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.383600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.387995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.388033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.388046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.392093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.392130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:38.687 [2024-07-15 22:44:56.392142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.396221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.396258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.396271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.400383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.400420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.400433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.404480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.404517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.404529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.408714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.408754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.408769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.412973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.413011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.413024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.417110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.417146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.417159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.421552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.421589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.421603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.425959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.426011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.426024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.430363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.430402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.430415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.434708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.434745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.434759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.439149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.439184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.439197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.443565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.443600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.443612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.447899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.447946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.447960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.452375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.452414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.452427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.456717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.456754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.456768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.461175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.461211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.687 [2024-07-15 22:44:56.461224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.687 [2024-07-15 22:44:56.465566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.687 [2024-07-15 22:44:56.465604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.465617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.469961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.469999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.470013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.474308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.474345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.474358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.478707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.478760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.478773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.483080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 
00:18:38.688 [2024-07-15 22:44:56.483115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.483128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.487405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.487441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.487455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.491798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.491836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.491850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.496174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.496211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.496223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.500407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.500443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.500456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.504700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.504737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.504750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.509169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.509205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.509218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.513500] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.513536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.513550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.688 [2024-07-15 22:44:56.518013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.688 [2024-07-15 22:44:56.518052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.688 [2024-07-15 22:44:56.518064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.522415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.522455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.522470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.526759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.526796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.526808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.531177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.531227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.535518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.535555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.535568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.539680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.539718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.539731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.544011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.544048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.544060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.548321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.548356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.548369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.552549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.552586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.552600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.556789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.556826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.556840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.561138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.561175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.561189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.565343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.565381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.565394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.569688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.569726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.569739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.573963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.573998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.948 [2024-07-15 22:44:56.574011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.948 [2024-07-15 22:44:56.578404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.948 [2024-07-15 22:44:56.578441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.578454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.582643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.582681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.582694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.587132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.587174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.587189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.592437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.592482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.596919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.596968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.596984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.601492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.601534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.601548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.606042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.606089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.606103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.610396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.610434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.610447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.614977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.615017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.615031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.619185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.619223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.619236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.623469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.623507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.623520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.627966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.628012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.628026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.632347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.632400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:38.949 [2024-07-15 22:44:56.632413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.636619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.636656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.636670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.641113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.641154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.641167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.645293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.645332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.645345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.649623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.649677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.649691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.654137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.654187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.654202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.658473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.658511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.658525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.662898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.662948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.662970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.667414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.667455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.667469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.671944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.949 [2024-07-15 22:44:56.671982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.949 [2024-07-15 22:44:56.671995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.949 [2024-07-15 22:44:56.676246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.950 [2024-07-15 22:44:56.676295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.950 [2024-07-15 22:44:56.676320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.950 [2024-07-15 22:44:56.680509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.950 [2024-07-15 22:44:56.680548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.950 [2024-07-15 22:44:56.680561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.950 [2024-07-15 22:44:56.684844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.950 [2024-07-15 22:44:56.684896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.950 [2024-07-15 22:44:56.684910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.950 [2024-07-15 22:44:56.689287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.950 [2024-07-15 22:44:56.689323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.950 [2024-07-15 22:44:56.689336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.950 [2024-07-15 22:44:56.693674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:38.950 [2024-07-15 22:44:56.693714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:38.950 [2024-07-15 22:44:56.693727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:18:38.950 [2024-07-15 22:44:56.697914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30)
00:18:38.950 [2024-07-15 22:44:56.697951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:38.950 [2024-07-15 22:44:56.697964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:38.950 [2024-07-15 22:44:56.702252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30)
00:18:38.950 [2024-07-15 22:44:56.702291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:38.950 [2024-07-15 22:44:56.702304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message pattern repeats many more times between 22:44:56.70 and 22:44:57.31: a data digest error on tqpair=(0x250ef30), the offending READ (sqid:1 cid:15 nsid:1 len:32) at a varying LBA, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 ...]
00:18:39.733 [2024-07-15 22:44:57.306333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30)
00:18:39.733 [2024-07-15 22:44:57.306381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:39.733 [2024-07-15 22:44:57.306395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:39.733 [2024-07-15 22:44:57.310817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30)
00:18:39.733 [2024-07-15 22:44:57.310858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:39.733 [2024-07-15 22:44:57.310884] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.315290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.315326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.315354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.319649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.319703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.324148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.324189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.324202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.328430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.328467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.328480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.332748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.332784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.332797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.336946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.336983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.336996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.341184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.341220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:39.733 [2024-07-15 22:44:57.341233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.345456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.345492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.345505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.349598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.349634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.349646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.353951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.353986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.354000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.358156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.358215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.358229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.362501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.362551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.362563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.366822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.366857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.366880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.371256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.371292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.371306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.375620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.375658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.375671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.733 [2024-07-15 22:44:57.380353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.733 [2024-07-15 22:44:57.380396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.733 [2024-07-15 22:44:57.380411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.384988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.385042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.385064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.389597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.389637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.389650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.394084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.394126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.394140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.398700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.398741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.398754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.403271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.403327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.403340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.407766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.407807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.407821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.412450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.412489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.412502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.416955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.416993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.417007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.421234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.421270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.421283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.425501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.425537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.425550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.429768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.429808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.429821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.433943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 
00:18:39.734 [2024-07-15 22:44:57.433978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.438260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.438302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.438315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.442539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.442574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.442586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.446610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.446645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.446657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.450706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.450760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.450773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.455233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.455271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.455284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.459591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.459627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.459640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.463967] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.464003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.464017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.468609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.468661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.468689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.473147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.473182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.473194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.477456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.477493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.477506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.481840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.481889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.481903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.486260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.486298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.486311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.490362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.490402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.490416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.494819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.494856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.494880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.498840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.498885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.498899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.503155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.503189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.503201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.734 [2024-07-15 22:44:57.507224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.734 [2024-07-15 22:44:57.507259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.734 [2024-07-15 22:44:57.507272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.511649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.511683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.511695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.516086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.516121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.516133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.520574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.520613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.520627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.525191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.525228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.525241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.529644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.529702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.529716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.534140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.534207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.534238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.538596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.538633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.538645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.543024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.543061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.543088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.547140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.547189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.547201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.551502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.551537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.551550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.555923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.555983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.555997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.560064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.560106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.560119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.735 [2024-07-15 22:44:57.564242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.735 [2024-07-15 22:44:57.564286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.735 [2024-07-15 22:44:57.564299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.568508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.568544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.568556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.572689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.572724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.572736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.576895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.576939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.576952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.580731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.580766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:39.995 [2024-07-15 22:44:57.580778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.584651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.584687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.584699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.588795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.588831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.588845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.592996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.593032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.593045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.597497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.597546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.597560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.601963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.602014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.606462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.606522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.606534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.610785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.610823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.610836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.615194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.615247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.615260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.619657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.995 [2024-07-15 22:44:57.619697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.995 [2024-07-15 22:44:57.619710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.995 [2024-07-15 22:44:57.624067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.624134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.624147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.628406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.628444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.628457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.632838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.632888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.632901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.637212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.637249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.637262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.641663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.641702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.641715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.646231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.646273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.646287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.650599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.650635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.650648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.655019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.655057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.655070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.659268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.659304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.659317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.663423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.663460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.663472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.667443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.667480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.667494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.671756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 
00:18:39.996 [2024-07-15 22:44:57.671792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.671805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.676008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.676044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.676057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.680093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.680129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.680141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.684274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.684308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.684321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.688462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.688498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.688511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.692670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.692705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.692718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.696911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.696946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.696959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.700939] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.700974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.700986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.705216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.705251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.705264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.709458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.709511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.709524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.713582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.713619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.713632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.717813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.717850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.717863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.722027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.722063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.722075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.726238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.726276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.996 [2024-07-15 22:44:57.726289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:39.996 [2024-07-15 22:44:57.730446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.996 [2024-07-15 22:44:57.730481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.730494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.734758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.734796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.734810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.739133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.739194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.739208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.743686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.743722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.743750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.748204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.748269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.748294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.753379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.753424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.753439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.758034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.758078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.758093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.762579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.762624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.762637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.767021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.767065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.767079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.771456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.771492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.771505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.775992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.776032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.776046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.780274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.780311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.780325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.784378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.784415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.784427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.788817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.788858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.788884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.793158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.793194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.793207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.797409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.797445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.797458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.801831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.801881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.801896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.805832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.805879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.805893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.809967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.810003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.810017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.814394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.814435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.814448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.818545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.818583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.818595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.822670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.822708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.822720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.997 [2024-07-15 22:44:57.827153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:39.997 [2024-07-15 22:44:57.827191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.997 [2024-07-15 22:44:57.827204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.831271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.831309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.831322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.835631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.835669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.835682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.840064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.840103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.840118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.844377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.844415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.844429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.848760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.848811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.848824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.853323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.853364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.853378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.857709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.857750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.857763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.862002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.862042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.862057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.866636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.866688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.866702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.871286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.258 [2024-07-15 22:44:57.871322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.258 [2024-07-15 22:44:57.871334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.258 [2024-07-15 22:44:57.875702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.875739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.875753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.880172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.880211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.880225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.884483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.884520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.884532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.888651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.888687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.888699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.893039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.893077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.893090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.897359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.897394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.897407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.901741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.901777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.901789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.906014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.906064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.910475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 
00:18:40.259 [2024-07-15 22:44:57.910539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.910552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.914994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.915030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.915043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.919383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.919435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.919448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.923878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.923925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.923940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.928385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.928421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.928434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.932950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.932990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.933005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.937367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.937403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.937416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.941756] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.941792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.941806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.946126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.946161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.946183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.950607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.950661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.950687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.954962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.954996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.955009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.959122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.959158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.959170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.963315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.963351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.963363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.967570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.967605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.967618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.971758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.971794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.971807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.975934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.975981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.980022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.980057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.980070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.984169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.984206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.984220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.988421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.988458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.988471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.992803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.992839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.992852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:57.997161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.259 [2024-07-15 22:44:57.997196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.259 [2024-07-15 22:44:57.997209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.259 [2024-07-15 22:44:58.001603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.001638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.001650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.260 [2024-07-15 22:44:58.006058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.006094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.006108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.260 [2024-07-15 22:44:58.010569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.010604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.010616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.260 [2024-07-15 22:44:58.014896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.014941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.014955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.260 [2024-07-15 22:44:58.019322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.019358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.019371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.260 [2024-07-15 22:44:58.023686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.023725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.023738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.260 [2024-07-15 22:44:58.028103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ef30) 00:18:40.260 [2024-07-15 22:44:58.028139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.260 [2024-07-15 22:44:58.028152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:40.260
00:18:40.260 Latency(us)
00:18:40.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.260 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:40.260 nvme0n1 : 2.00 7112.18 889.02 0.00 0.00 2245.85 1824.58 9234.62
00:18:40.260 ===================================================================================================================
00:18:40.260 Total : 7112.18 889.02 0.00 0.00 2245.85 1824.58 9234.62
00:18:40.260 0
00:18:40.260 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:40.260 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:40.260 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:40.260 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:40.260 | .driver_specific
00:18:40.260 | .nvme_error
00:18:40.260 | .status_code
00:18:40.260 | .command_transient_transport_error'
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 459 > 0 ))
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80773
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80773 ']'
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80773
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80773
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:40.829 killing process with pid 80773
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80773'
00:18:40.829 Received shutdown signal, test time was about 2.000000 seconds
00:18:40.829
00:18:40.829 Latency(us)
00:18:40.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.829 ===================================================================================================================
00:18:40.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:40.829 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80773
00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80773
00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80833 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80833 /var/tmp/bperf.sock 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80833 ']' 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:40.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.830 22:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:41.089 [2024-07-15 22:44:58.676852] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:41.089 [2024-07-15 22:44:58.676961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80833 ] 00:18:41.089 [2024-07-15 22:44:58.812276] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.347 [2024-07-15 22:44:58.931562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.347 [2024-07-15 22:44:58.986107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:41.912 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.912 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:41.912 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:41.912 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:42.195 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:42.195 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.195 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.195 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.195 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:42.195 22:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:42.454 nvme0n1 00:18:42.454 22:45:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:42.454 22:45:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.454 22:45:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.454 22:45:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.454 22:45:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:42.454 22:45:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:42.713 Running I/O for 2 seconds... 00:18:42.713 [2024-07-15 22:45:00.370144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fef90 00:18:42.713 [2024-07-15 22:45:00.372641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.713 [2024-07-15 22:45:00.372700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.713 [2024-07-15 22:45:00.385721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190feb58 00:18:42.713 [2024-07-15 22:45:00.388277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.713 [2024-07-15 22:45:00.388329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.713 [2024-07-15 22:45:00.401649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fe2e8 00:18:42.713 [2024-07-15 22:45:00.404162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.713 [2024-07-15 22:45:00.404214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.713 [2024-07-15 22:45:00.417321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fda78 00:18:42.713 [2024-07-15 22:45:00.419789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.419842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.433232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fd208 00:18:42.714 [2024-07-15 22:45:00.435681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.435733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:18:42.714 [2024-07-15 22:45:00.449148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fc998 00:18:42.714 [2024-07-15 22:45:00.451592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.451645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.465351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fc128 00:18:42.714 [2024-07-15 22:45:00.467829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.467900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.481581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fb8b8 00:18:42.714 [2024-07-15 22:45:00.484032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.484068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.497630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fb048 00:18:42.714 [2024-07-15 22:45:00.500038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.500089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.513866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fa7d8 00:18:42.714 [2024-07-15 22:45:00.516258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.516309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.530001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f9f68 00:18:42.714 [2024-07-15 22:45:00.532375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.714 [2024-07-15 22:45:00.532423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.714 [2024-07-15 22:45:00.545919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f96f8 00:18:42.973 [2024-07-15 22:45:00.548295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.548331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b 
p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.562136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f8e88 00:18:42.973 [2024-07-15 22:45:00.564420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.564455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.578067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f8618 00:18:42.973 [2024-07-15 22:45:00.580345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.580395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.593896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f7da8 00:18:42.973 [2024-07-15 22:45:00.596137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.596172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.609676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f7538 00:18:42.973 [2024-07-15 22:45:00.611910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.611945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.625455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f6cc8 00:18:42.973 [2024-07-15 22:45:00.627671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.627708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.641287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f6458 00:18:42.973 [2024-07-15 22:45:00.643470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.643506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.657080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f5be8 00:18:42.973 [2024-07-15 22:45:00.659257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.659295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 
cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.672850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f5378 00:18:42.973 [2024-07-15 22:45:00.675005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.675041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.688647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f4b08 00:18:42.973 [2024-07-15 22:45:00.690772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.690808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.704418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f4298 00:18:42.973 [2024-07-15 22:45:00.706526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.706562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.720184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f3a28 00:18:42.973 [2024-07-15 22:45:00.722268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.722303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.735953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f31b8 00:18:42.973 [2024-07-15 22:45:00.738014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.738049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.751702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f2948 00:18:42.973 [2024-07-15 22:45:00.753733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.753766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.767523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f20d8 00:18:42.973 [2024-07-15 22:45:00.769562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.769598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.783479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f1868 00:18:42.973 [2024-07-15 22:45:00.785470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.785501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.973 [2024-07-15 22:45:00.799249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f0ff8 00:18:42.973 [2024-07-15 22:45:00.801221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.973 [2024-07-15 22:45:00.801254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.815053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f0788 00:18:43.233 [2024-07-15 22:45:00.817003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.817032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.830939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eff18 00:18:43.233 [2024-07-15 22:45:00.832884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.832915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.846744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ef6a8 00:18:43.233 [2024-07-15 22:45:00.848663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.848695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.862616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eee38 00:18:43.233 [2024-07-15 22:45:00.864515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.864550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.878410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ee5c8 00:18:43.233 [2024-07-15 22:45:00.880286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.880316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.894151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190edd58 00:18:43.233 [2024-07-15 22:45:00.896022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.896052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.909950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ed4e8 00:18:43.233 [2024-07-15 22:45:00.911786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.911818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.925759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ecc78 00:18:43.233 [2024-07-15 22:45:00.927588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.927620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.941569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ec408 00:18:43.233 [2024-07-15 22:45:00.943380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.943412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.957336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ebb98 00:18:43.233 [2024-07-15 22:45:00.959124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.959155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.973160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eb328 00:18:43.233 [2024-07-15 22:45:00.974937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.974973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:00.988952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eaab8 00:18:43.233 [2024-07-15 22:45:00.990698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:00.990735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:01.004708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ea248 00:18:43.233 [2024-07-15 22:45:01.006473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:01.006510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:01.020605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e99d8 00:18:43.233 [2024-07-15 22:45:01.022349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:01.022385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:01.036495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e9168 00:18:43.233 [2024-07-15 22:45:01.038204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:01.038240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:43.233 [2024-07-15 22:45:01.052278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e88f8 00:18:43.233 [2024-07-15 22:45:01.053946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.233 [2024-07-15 22:45:01.053979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:43.492 [2024-07-15 22:45:01.068095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e8088 00:18:43.492 [2024-07-15 22:45:01.069752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.069787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.083956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e7818 00:18:43.493 [2024-07-15 22:45:01.085582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.085617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.099718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e6fa8 00:18:43.493 [2024-07-15 22:45:01.101334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.101368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.115520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e6738 00:18:43.493 [2024-07-15 22:45:01.117116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.117150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.131305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e5ec8 00:18:43.493 [2024-07-15 22:45:01.132892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.132925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.147100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e5658 00:18:43.493 [2024-07-15 22:45:01.148647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.148682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.163027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e4de8 00:18:43.493 [2024-07-15 22:45:01.164552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.164587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.178961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e4578 00:18:43.493 [2024-07-15 22:45:01.180474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.180512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.194779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e3d08 00:18:43.493 [2024-07-15 22:45:01.196321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.196354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.210887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e3498 00:18:43.493 [2024-07-15 22:45:01.212396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 
22:45:01.212429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.226695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e2c28 00:18:43.493 [2024-07-15 22:45:01.228171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.228205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.242518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e23b8 00:18:43.493 [2024-07-15 22:45:01.243976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.244010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.258323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e1b48 00:18:43.493 [2024-07-15 22:45:01.259752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.259788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.274131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e12d8 00:18:43.493 [2024-07-15 22:45:01.275525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.275563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.289935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e0a68 00:18:43.493 [2024-07-15 22:45:01.291311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.291348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.305724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e01f8 00:18:43.493 [2024-07-15 22:45:01.307091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.493 [2024-07-15 22:45:01.307126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:43.493 [2024-07-15 22:45:01.321524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190df988 00:18:43.493 [2024-07-15 22:45:01.322883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:43.493 [2024-07-15 22:45:01.322919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:43.752 [2024-07-15 22:45:01.337308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190df118 00:18:43.752 [2024-07-15 22:45:01.338634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.752 [2024-07-15 22:45:01.338670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:43.752 [2024-07-15 22:45:01.353146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190de8a8 00:18:43.752 [2024-07-15 22:45:01.354462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.354499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.369091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190de038 00:18:43.753 [2024-07-15 22:45:01.370380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.370416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.392058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190de038 00:18:43.753 [2024-07-15 22:45:01.394702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.394740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.408287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190de8a8 00:18:43.753 [2024-07-15 22:45:01.410846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.424423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190df118 00:18:43.753 [2024-07-15 22:45:01.426950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.426986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.440539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190df988 00:18:43.753 [2024-07-15 22:45:01.443097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16059 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.443133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.456633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e01f8 00:18:43.753 [2024-07-15 22:45:01.459079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.459116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.472494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e0a68 00:18:43.753 [2024-07-15 22:45:01.474993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.475030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.488693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e12d8 00:18:43.753 [2024-07-15 22:45:01.491117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.491153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.504852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e1b48 00:18:43.753 [2024-07-15 22:45:01.507293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.507346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.520744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e23b8 00:18:43.753 [2024-07-15 22:45:01.523141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.523195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.536794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e2c28 00:18:43.753 [2024-07-15 22:45:01.539181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.539233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.552826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e3498 00:18:43.753 [2024-07-15 22:45:01.555140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:16590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.555177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.568549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e3d08 00:18:43.753 [2024-07-15 22:45:01.570855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.753 [2024-07-15 22:45:01.570899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:43.753 [2024-07-15 22:45:01.584423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e4578 00:18:44.013 [2024-07-15 22:45:01.586705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.586741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.600247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e4de8 00:18:44.013 [2024-07-15 22:45:01.602495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.602532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.616018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e5658 00:18:44.013 [2024-07-15 22:45:01.618247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.618281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.631805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e5ec8 00:18:44.013 [2024-07-15 22:45:01.634011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.634045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.647579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e6738 00:18:44.013 [2024-07-15 22:45:01.649765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.649799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.663307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e6fa8 00:18:44.013 [2024-07-15 22:45:01.665482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:7242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.665532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.679176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e7818 00:18:44.013 [2024-07-15 22:45:01.681315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.681365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.694953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e8088 00:18:44.013 [2024-07-15 22:45:01.697066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.697117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.710780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e88f8 00:18:44.013 [2024-07-15 22:45:01.712888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.712946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.726534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e9168 00:18:44.013 [2024-07-15 22:45:01.728641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.728690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.742288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190e99d8 00:18:44.013 [2024-07-15 22:45:01.744360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.744409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.758052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ea248 00:18:44.013 [2024-07-15 22:45:01.760146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.760196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.773930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eaab8 00:18:44.013 [2024-07-15 22:45:01.775978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:6458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.776030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.789771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eb328 00:18:44.013 [2024-07-15 22:45:01.791803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.791839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.805437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ebb98 00:18:44.013 [2024-07-15 22:45:01.807508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.807562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.821283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ec408 00:18:44.013 [2024-07-15 22:45:01.823286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.823339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:44.013 [2024-07-15 22:45:01.837357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ecc78 00:18:44.013 [2024-07-15 22:45:01.839384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.013 [2024-07-15 22:45:01.839437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:44.272 [2024-07-15 22:45:01.853480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ed4e8 00:18:44.272 [2024-07-15 22:45:01.855468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.272 [2024-07-15 22:45:01.855519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:44.272 [2024-07-15 22:45:01.869467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190edd58 00:18:44.272 [2024-07-15 22:45:01.871436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.272 [2024-07-15 22:45:01.871487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:44.272 [2024-07-15 22:45:01.885141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ee5c8 00:18:44.272 [2024-07-15 22:45:01.887082] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.272 [2024-07-15 22:45:01.887133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:44.272 [2024-07-15 22:45:01.900918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eee38 00:18:44.272 [2024-07-15 22:45:01.902830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.272 [2024-07-15 22:45:01.902876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:44.272 [2024-07-15 22:45:01.916356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190ef6a8 00:18:44.272 [2024-07-15 22:45:01.918216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.272 [2024-07-15 22:45:01.918266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:44.272 [2024-07-15 22:45:01.932248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190eff18 00:18:44.272 [2024-07-15 22:45:01.934118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:01.934153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:01.948276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f0788 00:18:44.273 [2024-07-15 22:45:01.950222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:01.950257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:01.964281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f0ff8 00:18:44.273 [2024-07-15 22:45:01.966112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:01.966146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:01.980276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f1868 00:18:44.273 [2024-07-15 22:45:01.982049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:01.982084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:01.996167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f20d8 00:18:44.273 [2024-07-15 22:45:01.997914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:01.997942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:02.011992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f2948 00:18:44.273 [2024-07-15 22:45:02.013721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:02.013752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:02.027754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f31b8 00:18:44.273 [2024-07-15 22:45:02.029475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:02.029505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:02.043569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f3a28 00:18:44.273 [2024-07-15 22:45:02.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:02.045395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:02.059452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f4298 00:18:44.273 [2024-07-15 22:45:02.061093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:02.061125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:02.075304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f4b08 00:18:44.273 [2024-07-15 22:45:02.076995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:02.077028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:44.273 [2024-07-15 22:45:02.091193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f5378 00:18:44.273 [2024-07-15 22:45:02.092826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.273 [2024-07-15 22:45:02.092858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.107033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f5be8 00:18:44.532 [2024-07-15 22:45:02.108675] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.108707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.123039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f6458 00:18:44.532 [2024-07-15 22:45:02.124652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.124685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.138955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f6cc8 00:18:44.532 [2024-07-15 22:45:02.140520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.140550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.154677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f7538 00:18:44.532 [2024-07-15 22:45:02.156278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.156309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.170580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f7da8 00:18:44.532 [2024-07-15 22:45:02.172130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.172161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.186554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f8618 00:18:44.532 [2024-07-15 22:45:02.188107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.188153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.202654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f8e88 00:18:44.532 [2024-07-15 22:45:02.204167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.204198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.218573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f96f8 00:18:44.532 [2024-07-15 
22:45:02.220078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.220108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.234428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190f9f68 00:18:44.532 [2024-07-15 22:45:02.235946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.235977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.250445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fa7d8 00:18:44.532 [2024-07-15 22:45:02.251945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.251969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.266202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fb048 00:18:44.532 [2024-07-15 22:45:02.267589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.267617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.281836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fb8b8 00:18:44.532 [2024-07-15 22:45:02.283254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.283282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.297736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fc128 00:18:44.532 [2024-07-15 22:45:02.299204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.299231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.313507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fc998 00:18:44.532 [2024-07-15 22:45:02.314891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.314932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.329358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fd208 
00:18:44.532 [2024-07-15 22:45:02.330757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.330789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:44.532 [2024-07-15 22:45:02.345407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f97d0) with pdu=0x2000190fda78 00:18:44.532 [2024-07-15 22:45:02.346775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-07-15 22:45:02.346811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:44.532 00:18:44.532 Latency(us) 00:18:44.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.532 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.532 nvme0n1 : 2.01 15932.35 62.24 0.00 0.00 8025.63 4379.00 30265.72 00:18:44.532 =================================================================================================================== 00:18:44.532 Total : 15932.35 62.24 0.00 0.00 8025.63 4379.00 30265.72 00:18:44.532 0 00:18:44.791 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:44.791 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:44.791 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:44.791 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:44.791 | .driver_specific 00:18:44.791 | .nvme_error 00:18:44.791 | .status_code 00:18:44.791 | .command_transient_transport_error' 00:18:45.101 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 )) 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80833 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80833 ']' 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80833 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80833 00:18:45.102 killing process with pid 80833 00:18:45.102 Received shutdown signal, test time was about 2.000000 seconds 00:18:45.102 00:18:45.102 Latency(us) 00:18:45.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.102 =================================================================================================================== 00:18:45.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:45.102 22:45:02 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80833' 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80833 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80833 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80888 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80888 /var/tmp/bperf.sock 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80888 ']' 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:45.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.102 22:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.360 [2024-07-15 22:45:02.930890] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:45.360 [2024-07-15 22:45:02.931520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:18:45.360 Zero copy mechanism will not be used. 
00:18:45.360 llocations --file-prefix=spdk_pid80888 ] 00:18:45.360 [2024-07-15 22:45:03.066980] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.360 [2024-07-15 22:45:03.173281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.620 [2024-07-15 22:45:03.227492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:46.187 22:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.187 22:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:46.187 22:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:46.187 22:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:46.445 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:46.445 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.445 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.445 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.445 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.445 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.704 nvme0n1 00:18:46.963 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:46.963 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.963 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.963 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.963 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:46.963 22:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:46.963 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:46.963 Zero copy mechanism will not be used. 00:18:46.963 Running I/O for 2 seconds... 
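The trace above amounts to the following sequence, shown here as a minimal sketch rather than a drop-in script: bperf_rpc and rpc_cmd are the suite's wrappers around scripts/rpc.py, the former pointed at bdevperf's socket /var/tmp/bperf.sock (as the expanded command in the trace shows) and the latter assumed to target the suite's default RPC socket; the nvme0/nvme0n1 names and all flags are taken verbatim from the trace.

# Record per-command NVMe error statuses on the bdevperf side and retry
# failed I/O indefinitely, so digest errors are counted rather than fatal.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c error injection disabled while the controller attaches ...
rpc_cmd accel_error_inject_error -o crc32c -t disable

# ... then attach the target over TCP with data digest (--ddgst) enabled.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Start corrupting crc32c results (the -i 32 argument from the trace); data
# digests on the wire stop matching, so the WRITEs complete with
# COMMAND TRANSIENT TRANSPORT ERROR, as seen in the output below.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the 2-second workload, then read back how many commands ended with
# a transient transport error from the bdev's NVMe error statistics.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The (( 125 > 0 )) check earlier in the log is exactly this count being asserted non-zero for the previous randwrite run.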
00:18:46.963 [2024-07-15 22:45:04.664052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.963 [2024-07-15 22:45:04.664381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.664413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.669293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.669595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.669627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.674528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.674833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.674877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.679763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.680079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.680110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.684955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.685255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.685285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.690130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.690442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.690473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.695357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.695667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.695697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.700586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.700905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.700929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.705805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.706126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.706151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.711315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.711656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.711686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.718968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.719285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.719316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.724373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.724678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.724708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.729680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.729996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.730036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.734942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.735239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.735263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.740115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.740413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.740443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.745248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.745548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.745578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.750454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.750752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.750781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.755660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.755971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.755996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.760850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.761168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.761198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.766029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.766351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.766380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.771253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.771553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.771582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.776487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.776791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.776821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.782139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.782494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.782521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.789420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.789726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.789767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.964 [2024-07-15 22:45:04.794691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:46.964 [2024-07-15 22:45:04.795005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.964 [2024-07-15 22:45:04.795038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.799970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.800271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.800295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.805185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.805485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.805509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.810405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.810704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 
[2024-07-15 22:45:04.810729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.815603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.815922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.815949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.820789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.821103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.821128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.826034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.826353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.826376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.831247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.831544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.831567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.836453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.836752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.836776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.841599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.841923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.841969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.846851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.847162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.847192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.852115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.852413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.852443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.857284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.857582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.857611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.862514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.862810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.862841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.867693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.868005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.868029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.872894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.873192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.873217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.878089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.878400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.878427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.883297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.883595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.883626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.888494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.888791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.888821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.893677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.893993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.894022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.898955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.899261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.899285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.904143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.904442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.904475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.909331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.909626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.909655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.914514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.914813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.914835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.919664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.919974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.919999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.924840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.925155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.925179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.930044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.930353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.930376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.935260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.935560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.935585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.940441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.225 [2024-07-15 22:45:04.940747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.225 [2024-07-15 22:45:04.940774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.225 [2024-07-15 22:45:04.945646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.945961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.945985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.950833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.951152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.951178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.956015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.956312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.956346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.961223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.961520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.961554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.966410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.966709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.966739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.971619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.971932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.971956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.976762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.977072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.977096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.981944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.982253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.982276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.987148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.987449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.987479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.992320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 
22:45:04.992618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.992648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:04.997505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:04.997804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:04.997835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.002695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.003003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.003034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.007916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.008211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.008240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.013095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.013395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.013426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.018324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.018622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.018663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.023593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.023903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.023940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.028749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with 
pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.029058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.029087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.033949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.034255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.034284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.039099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.039399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.039430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.044347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.044643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.044675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.049516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.049812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.049843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.226 [2024-07-15 22:45:05.054721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.226 [2024-07-15 22:45:05.055033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.226 [2024-07-15 22:45:05.055061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.059909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.060207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.060231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.065059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.065359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.065388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.070269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.070569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.070597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.075465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.075768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.075793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.080685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.080992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.081015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.085800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.086113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.086145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.090965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.091262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.091293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.096138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.096435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.096465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.101362] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.101661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.101687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.106568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.106877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.106907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.111737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.112052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.112082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.116941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.117269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.122099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.122405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.122429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.127339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.127640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.127670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.132544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.132880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:47.486 [2024-07-15 22:45:05.137729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.138042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.138065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.142936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.143234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.143259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.148070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.148370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.148402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.153291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.153594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.153617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.158504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.158800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.158831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.163723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.164039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.164074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.168925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.169220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.169250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.174080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.174390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.486 [2024-07-15 22:45:05.174427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.486 [2024-07-15 22:45:05.179311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.486 [2024-07-15 22:45:05.179606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.179637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.184545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.184842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.184884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.189709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.190025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.190055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.194931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.195228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.195258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.200154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.200453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.200483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.205278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.205584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.205615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.210586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.210886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.210927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.215845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.216163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.216193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.221060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.221350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.221380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.226349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.226644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.226674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.231566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.231862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.231908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.236737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.237047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.237078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.241953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.242259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.242284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.247232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.247527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.247552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.252487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.252796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.252819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.257709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.258016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.258047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.262939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.263235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.263265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.268164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.268452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.268480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.273394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.273697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.273725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.278530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.278829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 
22:45:05.278852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.283747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.284062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.284087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.289105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.289400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.289423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.294302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.294600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.294634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.299438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.299734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.299763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.304675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.304991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.305014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.309832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.310143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.310166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.487 [2024-07-15 22:45:05.315034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.487 [2024-07-15 22:45:05.315333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:47.487 [2024-07-15 22:45:05.315362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.320162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.320462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.320493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.325317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.325611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.325634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.330469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.330772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.330794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.335629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.335943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.335973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.340795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.341101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.341130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.345966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.346272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.346294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.351089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.351385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.746 [2024-07-15 22:45:05.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.746 [2024-07-15 22:45:05.356233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.746 [2024-07-15 22:45:05.356532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.356555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.361403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.361699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.361732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.366545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.366840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.366862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.371690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.372003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.372031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.376858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.377166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.377195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.382007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.382314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.382344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.387131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.387429] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.387457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.392289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.392583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.392615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.397434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.397732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.397763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.402637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.402952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.402981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.407841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.408155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.412980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.413278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.413307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.418113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.418422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.418446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.423347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.423642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.423672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.428553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.428848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.428884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.433724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.434033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.434069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.439043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.439350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.439384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.444259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.444553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.444584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.449487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.449794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.449828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.454797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.455139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.455174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.460122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 
[2024-07-15 22:45:05.460427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.460460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.465405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.465698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.465729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.470704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.471052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.471081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.476093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.476390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.476419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.481251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.481532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.481562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.486511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.486831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.486860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.491647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.491981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.492010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.496809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) 
with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.497110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.497139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.501983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.502297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.747 [2024-07-15 22:45:05.502328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.747 [2024-07-15 22:45:05.507195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.747 [2024-07-15 22:45:05.507473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.507502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.512462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.512789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.517602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.517907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.517935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.522676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.522988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.523032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.527804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.528172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.533094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.533401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.533429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.538434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.538734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.538765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.543636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.543946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.543978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.548781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.549093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.549124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.554032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.554339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.554370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.559348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.559657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.559686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.564721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.565048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.565079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.570058] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.570367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.570397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:47.748 [2024-07-15 22:45:05.575313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:47.748 [2024-07-15 22:45:05.575631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.748 [2024-07-15 22:45:05.575662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.580492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.580780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.580810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.585709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.586029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.586058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.590995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.591283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.591312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.596198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.596493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.596523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.601392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.601688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.601726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:48.008 [2024-07-15 22:45:05.606631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.606951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.606985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.611785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.612105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.612135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.616971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.617269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.617299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.622199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.622494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.622532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.627518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.627815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.627845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.632689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.633020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.633050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.637963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.638274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.638303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.643205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.643528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.643560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.648475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.648780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.648811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.653644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.653952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.653982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.658923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.659221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.659254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.664126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.664428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.664458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.008 [2024-07-15 22:45:05.669370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.008 [2024-07-15 22:45:05.669674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.008 [2024-07-15 22:45:05.669705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.674604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.674919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.674949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.679754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.680065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.680096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.684951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.685252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.685282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.690184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.690481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.690511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.695360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.695657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.695688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.700612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.700927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.700957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.705782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.706095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.706126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.710988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.711288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.711318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.716168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.716470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.716500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.721344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.721640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.721670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.726558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.726854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.726895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.731747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.732059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.732089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.737003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.737298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.737329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.742210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.742520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.742550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.747479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.747776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 
22:45:05.747806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.752731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.753045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.753075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.758068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.758374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.758405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.763321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.763606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.763636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.768675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.768988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.769018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.773912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.774223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.774253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.779255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.779559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.779589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.784526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.784825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.784856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.789823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.790159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.790199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.795125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.795423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.795454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.800307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.800605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.800636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.805457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.805753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.805783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.810642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.810953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.810984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.815821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.009 [2024-07-15 22:45:05.816134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.009 [2024-07-15 22:45:05.816164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.009 [2024-07-15 22:45:05.821066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.010 [2024-07-15 22:45:05.821362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.010 [2024-07-15 22:45:05.821392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.010 [2024-07-15 22:45:05.826238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.010 [2024-07-15 22:45:05.826534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.010 [2024-07-15 22:45:05.826564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.010 [2024-07-15 22:45:05.831411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.010 [2024-07-15 22:45:05.831709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.010 [2024-07-15 22:45:05.831740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.010 [2024-07-15 22:45:05.836550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.010 [2024-07-15 22:45:05.836847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.010 [2024-07-15 22:45:05.836889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.841689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.841999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.842029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.846839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.847152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.847182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.852016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.852313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.852345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.857273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.857583] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.857614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.862448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.862743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.862773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.867658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.867968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.867997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.872778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.873090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.873121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.878005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.878313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.878343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.883194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.883494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.883524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.888335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.888632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.888661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.893514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.893818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.893848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.898713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.899024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.899054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.903932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.904229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.904258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.269 [2024-07-15 22:45:05.909176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.269 [2024-07-15 22:45:05.909473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.269 [2024-07-15 22:45:05.909504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.914428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.914726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.914759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.919635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.919946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.919977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.924815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.925127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.925158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.930017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 
00:18:48.270 [2024-07-15 22:45:05.930335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.930366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.935287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.935585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.935616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.940586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.940881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.940925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.945856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.946169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.946208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.951242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.951541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.951571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.956567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.956873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.956915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.961979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.962283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.962314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.967477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.967762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.967792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.972792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.973100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.973132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.978023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.978332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.978365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.983323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.983633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.983665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.988682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.988993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.989024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.993941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.994274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.994305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:05.999281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:05.999579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:05.999611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.004480] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.004783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.004815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.009770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.010086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.015148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.015443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.015473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.020594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.020906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.020936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.025903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.026238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.026270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.031212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.031506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.031541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.036540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.036852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.036897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:48.270 [2024-07-15 22:45:06.041916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.042253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.042288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.047233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.047554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.047600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.052573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.052885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.052927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.057868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.058221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.058252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.063308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.063602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.063649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.068659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.068964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.068999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.073932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.074255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.074286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.079231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.079522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.079553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.084501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.084805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.084844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.092685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.093137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.093169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.270 [2024-07-15 22:45:06.100575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.270 [2024-07-15 22:45:06.100936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.270 [2024-07-15 22:45:06.100985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.107696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.108063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.108099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.113358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.113672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.113707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.118904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.119232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.119261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.124115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.124414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.124448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.129560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.129861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.129901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.134979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.135300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.135333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.140383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.140667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.140697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.145762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.146072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.146117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.151034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.151316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.151346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.156158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.156441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.156474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.161257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.161540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.161573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.166524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.166826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.166860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.171858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.172176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.172205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.529 [2024-07-15 22:45:06.176974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.529 [2024-07-15 22:45:06.177256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.529 [2024-07-15 22:45:06.177285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.182110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.182446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.182479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.187213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.187492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.187524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.192416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.192695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 
22:45:06.192727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.197457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.197735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.197764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.202700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.203029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.203073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.207921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.208224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.208256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.212966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.213244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.213277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.217939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.218242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.218272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.223071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.223354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.223382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.228129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.228408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.228440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.233168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.233449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.233481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.238283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.238603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.238634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.243385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.243663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.243692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.248485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.248765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.248797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.253607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.253901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.253929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.258775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.259091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.259120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.263972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.264290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.264322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.269273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.269568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.269604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.274617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.274956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.275020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.279978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.280291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.280322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.285284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.285561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.285593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.290596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.290896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.290936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.295715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.296050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.296080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.530 [2024-07-15 22:45:06.301171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.530 [2024-07-15 22:45:06.301478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.530 [2024-07-15 22:45:06.301511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.306534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.306837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.306879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.311712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.312050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.312082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.317028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.317373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.317404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.323533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.323857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.323897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.328841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.329156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.329185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.335512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.335857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.335895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.341539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.341822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.341851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.346779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.347101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.347130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.352022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.352322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.352351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.357205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.357494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.357523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.531 [2024-07-15 22:45:06.362415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.531 [2024-07-15 22:45:06.362711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.531 [2024-07-15 22:45:06.362742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.790 [2024-07-15 22:45:06.367582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.790 [2024-07-15 22:45:06.367902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.790 [2024-07-15 22:45:06.367931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.790 [2024-07-15 22:45:06.372768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.790 [2024-07-15 22:45:06.373087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.790 [2024-07-15 22:45:06.373118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.790 [2024-07-15 22:45:06.378012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.790 
[2024-07-15 22:45:06.378331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.790 [2024-07-15 22:45:06.378361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.790 [2024-07-15 22:45:06.383299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.790 [2024-07-15 22:45:06.383589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.790 [2024-07-15 22:45:06.383620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.790 [2024-07-15 22:45:06.388467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.790 [2024-07-15 22:45:06.388775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.790 [2024-07-15 22:45:06.388807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.393698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.394016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.394046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.398923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.399226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.399255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.404211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.404510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.404541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.409583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.409892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.409935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.414916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) 
with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.415213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.415243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.421802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.422156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.422211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.427992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.428276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.428305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.433195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.433478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.433507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.438399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.438721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.438769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.443853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.444181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.444211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.449111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.449418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.449447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.454367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.454662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.454691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.459678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.460015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.460045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.465006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.465312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.465341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.470366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.470673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.470702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.475806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.476132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.476162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.481216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.481532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.481561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.486689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.487017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.487046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.491994] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.492302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.492331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.497288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.497579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.497608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.502610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.502919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.502949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.507839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.508151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.508182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.513048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.513342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.513372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.518241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.518535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.518565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.523400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.523696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.523727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:48.791 [2024-07-15 22:45:06.528598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.528914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.528943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.533780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.534087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.534117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.538919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.539220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.539249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.791 [2024-07-15 22:45:06.544091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.791 [2024-07-15 22:45:06.544386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.791 [2024-07-15 22:45:06.544416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.549272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.549572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.549603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.554452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.554759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.554790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.559633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.559944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.559973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.564824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.565140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.565170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.570030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.570341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.570371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.575280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.575582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.575611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.580536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.580847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.580889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.585811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.586132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.586162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.591019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.591328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.591358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.596302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.596592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.596622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.601492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.601804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.601834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.606782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.607093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.607123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.612069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.612374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.612403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.617393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.617696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.617726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:48.792 [2024-07-15 22:45:06.622696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:48.792 [2024-07-15 22:45:06.623004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.792 [2024-07-15 22:45:06.623034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.051 [2024-07-15 22:45:06.628253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:49.051 [2024-07-15 22:45:06.628544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.051 [2024-07-15 22:45:06.628574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.051 [2024-07-15 22:45:06.633674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:49.051 [2024-07-15 22:45:06.633977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.051 [2024-07-15 22:45:06.634008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.051 [2024-07-15 22:45:06.638822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:49.051 [2024-07-15 22:45:06.639149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.051 [2024-07-15 22:45:06.639179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.051 [2024-07-15 22:45:06.643960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:49.051 [2024-07-15 22:45:06.644259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.051 [2024-07-15 22:45:06.644289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.051 [2024-07-15 22:45:06.650195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9f9970) with pdu=0x2000190fef90 00:18:49.051 [2024-07-15 22:45:06.650524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.051 [2024-07-15 22:45:06.650554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.051 00:18:49.051 Latency(us) 00:18:49.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.051 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:49.051 nvme0n1 : 2.00 5842.53 730.32 0.00 0.00 2732.58 2249.08 12094.37 00:18:49.051 =================================================================================================================== 00:18:49.051 Total : 5842.53 730.32 0.00 0.00 2732.58 2249.08 12094.37 00:18:49.051 0 00:18:49.051 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:49.051 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:49.051 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:49.051 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:49.051 | .driver_specific 00:18:49.051 | .nvme_error 00:18:49.051 | .status_code 00:18:49.051 | .command_transient_transport_error' 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 377 > 0 )) 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80888 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80888 ']' 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80888 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.311 22:45:06 
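For readers tracing the digest-error check above: the count that gets compared against zero is read back from bdevperf over its RPC socket. A minimal stand-alone sketch, reusing the socket path, bdev name and jq filter that appear in this log (the variable name errs is mine; this is not the test script itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Ask bdevperf (listening on bperf.sock) for per-bdev iostat and pull out how
# many completions came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The digest-error case passes as long as at least one such error was observed.
(( errs > 0 )) && echo "host observed $errs transient transport errors"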
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80888 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:49.311 killing process with pid 80888 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80888' 00:18:49.311 Received shutdown signal, test time was about 2.000000 seconds 00:18:49.311 00:18:49.311 Latency(us) 00:18:49.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.311 =================================================================================================================== 00:18:49.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80888 00:18:49.311 22:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80888 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80688 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80688 ']' 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80688 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80688 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:49.569 killing process with pid 80688 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80688' 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80688 00:18:49.569 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80688 00:18:49.826 00:18:49.826 real 0m18.114s 00:18:49.826 user 0m35.009s 00:18:49.826 sys 0m4.796s 00:18:49.826 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.826 22:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.826 ************************************ 00:18:49.826 END TEST nvmf_digest_error 00:18:49.826 ************************************ 00:18:49.826 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:49.826 22:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:49.827 22:45:07 
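The nvmftestfini teardown that follows condenses to roughly the sketch below. Module and interface names are the ones in this log; remove_spdk_ns is not expanded in the output, so deleting the namespace by hand is an assumed equivalent, and nvmfpid stands in for whichever target pid the run recorded.

sync
# modprobe -r also unloads nvme_fabrics / nvme_keyring once nothing uses them,
# which is why three rmmod lines appear below for a single modprobe call.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" 2>/dev/null || true                     # target app, if still alive
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true    # assumed body of remove_spdk_ns
ip -4 addr flush nvmf_init_if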
nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.827 rmmod nvme_tcp 00:18:49.827 rmmod nvme_fabrics 00:18:49.827 rmmod nvme_keyring 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80688 ']' 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80688 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80688 ']' 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80688 00:18:49.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80688) - No such process 00:18:49.827 Process with pid 80688 is not found 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80688 is not found' 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.827 22:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:50.085 00:18:50.085 real 0m38.078s 00:18:50.085 user 1m11.959s 00:18:50.085 sys 0m10.378s 00:18:50.085 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.085 22:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:50.085 ************************************ 00:18:50.085 END TEST nvmf_digest 00:18:50.085 ************************************ 00:18:50.085 22:45:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:50.085 22:45:07 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:50.085 22:45:07 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:50.085 22:45:07 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:50.085 22:45:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:50.085 22:45:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.085 22:45:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.085 ************************************ 00:18:50.085 START TEST nvmf_host_multipath 00:18:50.085 ************************************ 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:50.085 * Looking for test storage... 
00:18:50.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:18:50.085 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:50.086 22:45:07 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:50.086 Cannot find device "nvmf_tgt_br" 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.086 Cannot find device "nvmf_tgt_br2" 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:50.086 Cannot find device "nvmf_tgt_br" 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:50.086 Cannot find device "nvmf_tgt_br2" 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:50.086 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.344 22:45:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
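Condensed into a hand-runnable form, the veth topology nvmf_veth_init is assembling around this point looks roughly like the sketch below (interface names and 10.0.0.x addresses as in the log; the second target interface for 10.0.0.3 is omitted for brevity; meant for a disposable test VM, run as root, and a sketch rather than the common.sh code itself):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # root namespace -> target namespace sanity check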
00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.344 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:50.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:18:50.344 00:18:50.344 --- 10.0.0.2 ping statistics --- 00:18:50.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.344 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:50.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:50.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:50.345 00:18:50.345 --- 10.0.0.3 ping statistics --- 00:18:50.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.345 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:18:50.345 00:18:50.345 --- 10.0.0.1 ping statistics --- 00:18:50.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.345 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.345 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81157 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81157 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 81157 ']' 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.602 22:45:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 [2024-07-15 22:45:08.258708] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:18:50.602 [2024-07-15 22:45:08.258821] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.602 [2024-07-15 22:45:08.403416] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:50.860 [2024-07-15 22:45:08.570109] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.860 [2024-07-15 22:45:08.570201] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.860 [2024-07-15 22:45:08.570227] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.860 [2024-07-15 22:45:08.570239] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.860 [2024-07-15 22:45:08.570249] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.860 [2024-07-15 22:45:08.570472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.860 [2024-07-15 22:45:08.570498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.860 [2024-07-15 22:45:08.651447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81157 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:51.793 [2024-07-15 22:45:09.547769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.793 22:45:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:52.052 Malloc0 00:18:52.052 22:45:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:52.311 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.569 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.828 [2024-07-15 22:45:10.603634] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.828 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:53.087 [2024-07-15 22:45:10.843735] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81217 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81217 /var/tmp/bdevperf.sock 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81217 ']' 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
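Before bdevperf comes up below, the target side that was just provisioned can be recapped as the short sketch here. Binary path, NQN, addresses, ports and sizes are the values shown in this log; the sketch is a condensed recap under those assumptions, not multipath.sh itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# Launch the target app inside the test namespace: cores 0-1 (-m 0x3), all
# tracepoint groups (-e 0xFFFF), RPC on the default /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
sleep 2   # the real script uses waitforlisten on /var/tmp/spdk.sock instead
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2     # -r enables ANA reporting
$rpc nvmf_subsystem_add_ns $nqn Malloc0
# Two listeners on the same address give the host two distinct paths to move I/O between.
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421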
00:18:53.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:53.087 22:45:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:54.462 22:45:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.462 22:45:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:54.462 22:45:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:54.462 22:45:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:54.720 Nvme0n1 00:18:54.720 22:45:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:55.287 Nvme0n1 00:18:55.287 22:45:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:55.287 22:45:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:56.217 22:45:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:56.217 22:45:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:56.474 22:45:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:56.733 22:45:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:56.733 22:45:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81262 00:18:56.733 22:45:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:56.733 22:45:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.287 Attaching 4 probes... 
00:19:03.287 @path[10.0.0.2, 4421]: 15808 00:19:03.287 @path[10.0.0.2, 4421]: 15848 00:19:03.287 @path[10.0.0.2, 4421]: 15870 00:19:03.287 @path[10.0.0.2, 4421]: 16125 00:19:03.287 @path[10.0.0.2, 4421]: 15740 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81262 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:03.287 22:45:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:03.287 22:45:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:03.545 22:45:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:03.545 22:45:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81376 00:19:03.545 22:45:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:03.545 22:45:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:10.099 Attaching 4 probes... 
00:19:10.099 @path[10.0.0.2, 4420]: 15987 00:19:10.099 @path[10.0.0.2, 4420]: 16191 00:19:10.099 @path[10.0.0.2, 4420]: 16376 00:19:10.099 @path[10.0.0.2, 4420]: 16104 00:19:10.099 @path[10.0.0.2, 4420]: 15904 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:10.099 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:10.100 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:10.100 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81376 00:19:10.100 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:10.100 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:10.100 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:10.100 22:45:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:10.358 22:45:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:10.358 22:45:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:10.358 22:45:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81494 00:19:10.358 22:45:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.004 Attaching 4 probes... 
00:19:17.004 @path[10.0.0.2, 4421]: 14426 00:19:17.004 @path[10.0.0.2, 4421]: 16803 00:19:17.004 @path[10.0.0.2, 4421]: 17108 00:19:17.004 @path[10.0.0.2, 4421]: 15969 00:19:17.004 @path[10.0.0.2, 4421]: 15904 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81494 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:17.004 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:17.262 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:17.262 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81601 00:19:17.262 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:17.262 22:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:23.821 22:45:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:23.821 22:45:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.821 Attaching 4 probes... 
00:19:23.821 00:19:23.821 00:19:23.821 00:19:23.821 00:19:23.821 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81601 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:23.821 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:24.082 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:24.082 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81719 00:19:24.082 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:24.082 22:45:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:30.655 22:45:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:30.655 22:45:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.655 Attaching 4 probes... 
00:19:30.655 @path[10.0.0.2, 4421]: 16585 00:19:30.655 @path[10.0.0.2, 4421]: 17300 00:19:30.655 @path[10.0.0.2, 4421]: 17257 00:19:30.655 @path[10.0.0.2, 4421]: 17243 00:19:30.655 @path[10.0.0.2, 4421]: 17122 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81719 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:30.655 22:45:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:32.029 22:45:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:32.029 22:45:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81843 00:19:32.029 22:45:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:32.029 22:45:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:38.588 Attaching 4 probes... 
00:19:38.588 @path[10.0.0.2, 4420]: 17094 00:19:38.588 @path[10.0.0.2, 4420]: 17620 00:19:38.588 @path[10.0.0.2, 4420]: 17610 00:19:38.588 @path[10.0.0.2, 4420]: 17498 00:19:38.588 @path[10.0.0.2, 4420]: 17431 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81843 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:38.588 22:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:38.588 [2024-07-15 22:45:56.009355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:38.588 22:45:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:38.588 22:45:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:45.224 22:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:45.224 22:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82016 00:19:45.224 22:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:45.224 22:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81157 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:50.492 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:50.492 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:51.081 Attaching 4 probes... 
00:19:51.081 @path[10.0.0.2, 4421]: 16948 00:19:51.081 @path[10.0.0.2, 4421]: 17287 00:19:51.081 @path[10.0.0.2, 4421]: 17256 00:19:51.081 @path[10.0.0.2, 4421]: 17330 00:19:51.081 @path[10.0.0.2, 4421]: 17292 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82016 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81217 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81217 ']' 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81217 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81217 00:19:51.081 killing process with pid 81217 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81217' 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81217 00:19:51.081 22:46:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81217 00:19:51.081 Connection closed with partial response: 00:19:51.081 00:19:51.081 00:19:51.349 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81217 00:19:51.349 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:51.349 [2024-07-15 22:45:10.925250] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:19:51.349 [2024-07-15 22:45:10.925387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81217 ] 00:19:51.349 [2024-07-15 22:45:11.064495] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.349 [2024-07-15 22:45:11.184673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.349 [2024-07-15 22:45:11.242967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:51.349 Running I/O for 90 seconds... 
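Since try.txt is being dumped at this point, a brief recap of the host-side flow that produced it may help: bdevperf is started with a verify workload, the same subsystem is attached twice (the second attach with -x multipath adds port 4421 as an extra path), and each confirm_io_on_port cycle flips ANA states and compares the bpftrace counters against the listener that reports the expected state. Commands, paths and filters below are the ones already shown in this log; the snippet is a condensed sketch, not the test script.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bsock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r $bsock -q 128 -o 4096 -w verify -t 90 &
sleep 2   # the real script waits for the RPC socket before issuing commands
$rpc -s $bsock bdev_nvme_set_options -r -1
# One controller, two TCP paths: the first attach creates Nvme0n1, the second adds 4421 as a multipath leg.
$rpc -s $bsock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -l -1 -o 10
$rpc -s $bsock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x multipath -l -1 -o 10
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s $bsock perform_tests &
# Steer I/O by flipping ANA states on the target, e.g. make 4421 the optimized path.
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
# The port that reports the expected ANA state should match the port the nvmf_path.bt
# probes saw I/O on (the "@path[10.0.0.2, 4421]: N" lines in trace.txt above).
$rpc nvmf_subsystem_get_listeners $nqn \
  | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'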
00:19:51.349 [2024-07-15 22:45:21.307548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.307981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.307996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.349 [2024-07-15 22:45:21.308265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.349 [2024-07-15 22:45:21.308299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.349 [2024-07-15 22:45:21.308333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.349 [2024-07-15 22:45:21.308367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:51.349 [2024-07-15 22:45:21.308387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.349 [2024-07-15 22:45:21.308400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.308828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.308901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.308938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.308959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.308972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.309427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.309939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.309977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:19:51.350 [2024-07-15 22:45:21.310258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.350 [2024-07-15 22:45:21.310355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.350 [2024-07-15 22:45:21.310882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:51.350 [2024-07-15 22:45:21.310902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.310927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.310950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.310964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.311537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:51.351 [2024-07-15 22:45:21.311714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.311984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.311998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.312018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.312031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.312051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9320 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.312065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:21.313540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:21.313835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:21.313854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.862639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.862713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.862773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.862794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.862817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.862833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.862861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.862892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.862915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.862930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.862972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.862988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863199] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.863391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 
m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.863975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.863997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.864012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.864048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.351 [2024-07-15 22:45:27.864340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.864375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.864410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:51.351 [2024-07-15 22:45:27.864439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.351 [2024-07-15 22:45:27.864455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:51.352 [2024-07-15 22:45:27.864703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.864936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.864972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.864993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.865518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:19:51.352 [2024-07-15 22:45:27.865859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.865984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.865998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.866535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.866978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.866993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.867028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.867050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.867064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.867085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.867100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.867845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.352 [2024-07-15 22:45:27.867888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.867934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.867952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.867982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.867997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:27.868659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:27.868683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:34.943173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:34.943246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:34.943320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:34.943340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:34.943362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.352 [2024-07-15 22:45:34.943377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:51.352 [2024-07-15 22:45:34.943397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.943836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.943895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.943949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.943970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.943984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:51.353 [2024-07-15 22:45:34.944746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.944978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.944998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.945350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.945965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.945990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:19:51.353 [2024-07-15 22:45:34.946251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.946347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.946982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.946996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.353 [2024-07-15 22:45:34.947034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.947078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.947116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.947154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.947192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.353 [2024-07-15 22:45:34.947230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:51.353 [2024-07-15 22:45:34.947254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.947268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.947306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.947344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.947971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:34.947984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948261] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:34.948613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:34.948627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 
m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.407697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.407779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.407818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.407854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.407906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.407943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.407991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.408325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.408979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.408993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:34 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.354 [2024-07-15 22:45:48.409499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99240 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:51.354 [2024-07-15 22:45:48.409848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.354 [2024-07-15 22:45:48.409977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.354 [2024-07-15 22:45:48.409992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410171] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410467] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.410851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.410901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.410930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.410958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.410973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.410986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 
[2024-07-15 22:45:48.411093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.355 [2024-07-15 22:45:48.411434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.411640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ea10 is same with the state(5) to be set 00:19:51.355 [2024-07-15 22:45:48.411672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 
22:45:48.411693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99480 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.411706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.411746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100000 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.411759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.411792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100008 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.411805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.411838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100016 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.411857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.411904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100024 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.411917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.411950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100032 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.411969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.411982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.411992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.412002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100040 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.412016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.412045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.412055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100048 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.412068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.355 [2024-07-15 22:45:48.412091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.355 [2024-07-15 22:45:48.412101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:19:51.355 [2024-07-15 22:45:48.412114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412174] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x229ea10 was disconnected and freed. reset controller. 00:19:51.355 [2024-07-15 22:45:48.412308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.355 [2024-07-15 22:45:48.412334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.355 [2024-07-15 22:45:48.412363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.355 [2024-07-15 22:45:48.412390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.355 [2024-07-15 22:45:48.412428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.355 [2024-07-15 22:45:48.412456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.355 [2024-07-15 22:45:48.412476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f2a0 is same with the state(5) to be set 
00:19:51.355 [2024-07-15 22:45:48.413629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:51.355 [2024-07-15 22:45:48.413668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f2a0 (9): Bad file descriptor
00:19:51.355 [2024-07-15 22:45:48.414093] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:51.355 [2024-07-15 22:45:48.414125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221f2a0 with addr=10.0.0.2, port=4421
00:19:51.355 [2024-07-15 22:45:48.414143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f2a0 is same with the state(5) to be set
00:19:51.355 [2024-07-15 22:45:48.414222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f2a0 (9): Bad file descriptor
00:19:51.355 [2024-07-15 22:45:48.414271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:51.355 [2024-07-15 22:45:48.414288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:51.355 [2024-07-15 22:45:48.414320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:51.355 [2024-07-15 22:45:48.414355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:51.355 [2024-07-15 22:45:48.414372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:51.355 [2024-07-15 22:45:58.486817] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:51.355 Received shutdown signal, test time was about 55.759505 seconds
00:19:51.355
00:19:51.355 Latency(us)
00:19:51.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.355 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:51.355 Verification LBA range: start 0x0 length 0x4000
00:19:51.355 Nvme0n1 : 55.76 7207.95 28.16 0.00 0.00 17731.64 1206.46 7046430.72
00:19:51.355 ===================================================================================================================
00:19:51.355 Total : 7207.95 28.16 0.00 0.00 17731.64 1206.46 7046430.72
00:19:51.355 22:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:51.355 22:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:19:51.355 22:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:51.613 rmmod nvme_tcp
00:19:51.613 rmmod nvme_fabrics
00:19:51.613 rmmod
nvme_keyring 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81157 ']' 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81157 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81157 ']' 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81157 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81157 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.613 killing process with pid 81157 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81157' 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81157 00:19:51.613 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81157 00:19:51.871 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.871 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.871 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.871 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.871 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.871 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.872 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.872 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.872 22:46:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:51.872 ************************************ 00:19:51.872 END TEST nvmf_host_multipath 00:19:51.872 ************************************ 00:19:51.872 00:19:51.872 real 1m1.873s 00:19:51.872 user 2m52.239s 00:19:51.872 sys 0m18.325s 00:19:51.872 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.872 22:46:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:51.872 22:46:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:51.872 22:46:09 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:51.872 22:46:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:51.872 22:46:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.872 22:46:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.872 ************************************ 00:19:51.872 START 
TEST nvmf_timeout 00:19:51.872 ************************************ 00:19:51.872 22:46:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:52.130 * Looking for test storage... 00:19:52.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.130 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 
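Condensed from the xtrace output above, the environment that host/timeout.sh establishes before touching the network amounts to a handful of shell variables. The values below are copied from this run's trace; the host NQN/ID pair is produced by nvme gen-hostnqn, so that UUID differs on every run:

MALLOC_BDEV_SIZE=64                       # MiB backing the Malloc0 namespace
MALLOC_BLOCK_SIZE=512                     # block size of the Malloc0 bdev
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock  # RPC socket of the bdevperf initiator
NVMF_PORT=4420                            # primary NVMe/TCP listener port
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)          # regenerated per run; this run got uuid:d591d0cc-2041-4f11-80f5-97d971e06385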
00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:52.131 Cannot find device "nvmf_tgt_br" 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.131 Cannot find device "nvmf_tgt_br2" 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:52.131 Cannot find device "nvmf_tgt_br" 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:52.131 Cannot find device "nvmf_tgt_br2" 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:52.131 22:46:09 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.131 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:52.389 22:46:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:52.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:52.390 00:19:52.390 --- 10.0.0.2 ping statistics --- 00:19:52.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.390 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:52.390 00:19:52.390 --- 10.0.0.3 ping statistics --- 00:19:52.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.390 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:52.390 00:19:52.390 --- 10.0.0.1 ping statistics --- 00:19:52.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.390 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82324 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82324 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82324 ']' 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
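The nvmf_veth_init sequence traced above (taken because NET_TYPE=virt) is easier to follow with the xtrace noise stripped away. The sketch below uses only commands, interface names and addresses that appear in the trace; the authoritative implementation, including cleanup of leftovers and error handling, lives in test/nvmf/common.sh:

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the root-namespace ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in, let the bridge forward, then sanity-check the three addresses.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# The target application is then started inside the namespace, as shown in the trace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3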
00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.390 22:46:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:52.390 [2024-07-15 22:46:10.175702] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:19:52.390 [2024-07-15 22:46:10.175780] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.648 [2024-07-15 22:46:10.312822] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:52.648 [2024-07-15 22:46:10.412883] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.648 [2024-07-15 22:46:10.413177] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.648 [2024-07-15 22:46:10.413339] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.648 [2024-07-15 22:46:10.413391] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.648 [2024-07-15 22:46:10.413420] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.648 [2024-07-15 22:46:10.413662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.648 [2024-07-15 22:46:10.413671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.648 [2024-07-15 22:46:10.468158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.581 22:46:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:53.838 [2024-07-15 22:46:11.461041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.838 22:46:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:54.095 Malloc0 00:19:54.095 22:46:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:54.353 22:46:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:54.612 22:46:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.868 [2024-07-15 
22:46:12.541900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82373 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82373 /var/tmp/bdevperf.sock 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82373 ']' 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.868 22:46:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 [2024-07-15 22:46:12.609083] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:19:54.868 [2024-07-15 22:46:12.609175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82373 ] 00:19:55.164 [2024-07-15 22:46:12.741255] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.164 [2024-07-15 22:46:12.849284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.164 [2024-07-15 22:46:12.903277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:56.095 22:46:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.095 22:46:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:56.095 22:46:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:56.095 22:46:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:56.353 NVMe0n1 00:19:56.611 22:46:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82397 00:19:56.611 22:46:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.611 22:46:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:56.611 Running I/O for 10 seconds... 
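Before the "Running I/O for 10 seconds..." line above, the test has configured the target and the bdevperf initiator entirely over JSON-RPC. Stripped of the xtrace prefixes, the sequence is roughly the following; $rpc_py is scripts/rpc.py as set earlier, and the backgrounding of bdevperf and of the perform_tests helper reflects how the harness drives them rather than anything extra from this trace:

# Target side: TCP transport, a 64 MiB malloc namespace, and a listener on 10.0.0.2:4420.
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits for RPCs (-z), then attaches the remote controller with the
# timeout parameters this test exercises (5 s ctrlr-loss timeout, 2 s reconnect delay).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the 10-second verify workload against the exported NVMe0n1 bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &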
00:19:57.543 22:46:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.803 [2024-07-15 22:46:15.456245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.803 [2024-07-15 22:46:15.456321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.803 [2024-07-15 22:46:15.456336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.803 [2024-07-15 22:46:15.456345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.803 [2024-07-15 22:46:15.456355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.803 [2024-07-15 22:46:15.456365] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456374] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456392] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456419] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456498] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the 
state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456716] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456884] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.456999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 
22:46:15.457139] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457183] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.804 [2024-07-15 22:46:15.457237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457317] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same 
with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457344] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:57.805 [2024-07-15 22:46:15.457408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 
22:46:15.457631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.457980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.457996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.805 [2024-07-15 22:46:15.458273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.805 [2024-07-15 22:46:15.458284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65112 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:57.806 [2024-07-15 22:46:15.458558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.458983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.458992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459012] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.806 [2024-07-15 22:46:15.459222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.806 [2024-07-15 22:46:15.459233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:57.807 [2024-07-15 22:46:15.459433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.807 [2024-07-15 22:46:15.459901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.459926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.459947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.459968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.459988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.459998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.460019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.460039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.460058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.460078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.460098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.807 [2024-07-15 22:46:15.460117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.807 [2024-07-15 22:46:15.460126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.808 [2024-07-15 22:46:15.460146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.808 [2024-07-15 22:46:15.460171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.808 [2024-07-15 22:46:15.460191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.808 [2024-07-15 22:46:15.460212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.808 [2024-07-15 22:46:15.460232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21628d0 is same with the state(5) to be set 00:19:57.808 [2024-07-15 22:46:15.460258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.808 [2024-07-15 22:46:15.460266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.808 [2024-07-15 22:46:15.460274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:19:57.808 [2024-07-15 22:46:15.460283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.808 [2024-07-15 22:46:15.460336] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21628d0 was disconnected and freed. reset controller. 
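What the wall of NOTICE lines above amounts to: after the qpair to the target is torn down, bdev_nvme drains qpair 0x21628d0 by completing every still-queued command by hand, printing each READ/WRITE alongside a completion whose status 00/08 is status code type 0 (generic) with status code 0x08, i.e. "Command Aborted due to SQ Deletion"; the qpair is then freed and a controller reset is scheduled. For a quick summary of such an abort storm, standard text tools are enough; a minimal sketch, assuming the console output was saved to a file named build.log (the file name is only a placeholder):

  # count completions carrying the SQ DELETION abort status (build.log is a placeholder)
  grep -c 'ABORTED - SQ DELETION' build.log
  # tally the aborted commands by opcode (READ vs WRITE)
  grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c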
00:19:57.808 [2024-07-15 22:46:15.460602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.808 [2024-07-15 22:46:15.460678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111ee0 (9): Bad file descriptor 00:19:57.808 [2024-07-15 22:46:15.460781] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.808 [2024-07-15 22:46:15.460802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2111ee0 with addr=10.0.0.2, port=4420 00:19:57.808 [2024-07-15 22:46:15.460813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2111ee0 is same with the state(5) to be set 00:19:57.808 [2024-07-15 22:46:15.460831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111ee0 (9): Bad file descriptor 00:19:57.808 [2024-07-15 22:46:15.460846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.808 [2024-07-15 22:46:15.460855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.808 [2024-07-15 22:46:15.460882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.808 [2024-07-15 22:46:15.460909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.808 [2024-07-15 22:46:15.460920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.808 22:46:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:59.708 [2024-07-15 22:46:17.461218] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.708 [2024-07-15 22:46:17.461297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2111ee0 with addr=10.0.0.2, port=4420 00:19:59.708 [2024-07-15 22:46:17.461314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2111ee0 is same with the state(5) to be set 00:19:59.708 [2024-07-15 22:46:17.461341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111ee0 (9): Bad file descriptor 00:19:59.708 [2024-07-15 22:46:17.461373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.708 [2024-07-15 22:46:17.461390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:59.708 [2024-07-15 22:46:17.461404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:59.708 [2024-07-15 22:46:17.461432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
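Both reconnect attempts above die inside uring_sock_create with errno = 111, which on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 is being actively refused, typically because nothing is listening on that port anymore, so the controller stays in the failed state and host/timeout.sh simply sleeps two seconds before checking on it again. If the numeric errno is unfamiliar, it can be decoded with a plain python3 one-liner (nothing SPDK-specific about it):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused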
00:19:59.708 [2024-07-15 22:46:17.461444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.708 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:59.708 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.708 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:59.981 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:59.981 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:59.981 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:59.981 22:46:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:00.239 22:46:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:00.239 22:46:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:02.143 [2024-07-15 22:46:19.461671] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:02.143 [2024-07-15 22:46:19.461743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2111ee0 with addr=10.0.0.2, port=4420 00:20:02.143 [2024-07-15 22:46:19.461760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2111ee0 is same with the state(5) to be set 00:20:02.143 [2024-07-15 22:46:19.461787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111ee0 (9): Bad file descriptor 00:20:02.143 [2024-07-15 22:46:19.461807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:02.143 [2024-07-15 22:46:19.461817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:02.143 [2024-07-15 22:46:19.461828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:02.143 [2024-07-15 22:46:19.461856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:02.143 [2024-07-15 22:46:19.461881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.048 [2024-07-15 22:46:21.462046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.048 [2024-07-15 22:46:21.462108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.048 [2024-07-15 22:46:21.462120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.048 [2024-07-15 22:46:21.462131] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:04.048 [2024-07-15 22:46:21.462170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
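While the reconnects keep failing, the script verifies that bdevperf still reports the controller and its namespace bdev: get_controller and get_bdev just ask the bdevperf instance over its private RPC socket for the object names and compare them against NVMe0 and NVMe0n1. The two queries, exactly as issued here, can be rerun by hand against a live bdevperf (same socket path as this job uses); once the controller has finally been given up on, both return nothing, which is what the later [[ '' == '' ]] checks in this log rely on:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected here: NVMe0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # expected here: NVMe0n1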
00:20:04.993 00:20:04.993 Latency(us) 00:20:04.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.993 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.993 Verification LBA range: start 0x0 length 0x4000 00:20:04.993 NVMe0n1 : 8.16 993.10 3.88 15.69 0.00 126765.07 3768.32 7046430.72 00:20:04.993 =================================================================================================================== 00:20:04.993 Total : 993.10 3.88 15.69 0.00 126765.07 3768.32 7046430.72 00:20:04.993 0 00:20:05.302 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:05.302 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:05.302 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.561 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:05.562 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:05.562 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:05.562 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82397 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82373 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82373 ']' 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82373 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:05.821 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.822 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82373 00:20:05.822 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:05.822 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:05.822 killing process with pid 82373 00:20:05.822 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82373' 00:20:05.822 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82373 00:20:05.822 Received shutdown signal, test time was about 9.288841 seconds 00:20:05.822 00:20:05.822 Latency(us) 00:20:05.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.822 =================================================================================================================== 00:20:05.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.822 22:46:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82373 00:20:06.079 22:46:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.337 [2024-07-15 22:46:24.030269] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.337 22:46:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82513 00:20:06.337 22:46:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:06.337 22:46:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82513 /var/tmp/bdevperf.sock 00:20:06.337 22:46:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82513 ']' 00:20:06.337 22:46:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.337 22:46:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.338 22:46:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.338 22:46:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.338 22:46:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:06.338 [2024-07-15 22:46:24.108510] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:20:06.338 [2024-07-15 22:46:24.108607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82513 ] 00:20:06.596 [2024-07-15 22:46:24.253161] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.596 [2024-07-15 22:46:24.398720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.854 [2024-07-15 22:46:24.469083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:07.420 22:46:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.420 22:46:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:07.420 22:46:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:07.678 22:46:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:08.243 NVMe0n1 00:20:08.243 22:46:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82542 00:20:08.243 22:46:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.243 22:46:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:08.243 Running I/O for 10 seconds... 
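This second run re-adds the TCP listener, restarts bdevperf with core mask 0x4, adjusts the bdev_nvme options (bdev_nvme_set_options -r -1, as logged), and attaches NVMe0 with explicit recovery limits; roughly: retry the connection every second (--reconnect-delay-sec 1), start failing queued I/O after two seconds without a connection (--fast-io-fail-timeout-sec 2), and drop the controller altogether after five seconds (--ctrlr-loss-timeout-sec 5). Replaying the setup by hand looks like the following sketch, with every value copied from the logged commands and assuming a bdevperf instance already listening on the same RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # then start the workload defined on the bdevperf command line (-q 128 -o 4096 -w verify -t 10)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests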
00:20:09.178 22:46:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.440 [2024-07-15 22:46:27.102394] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102466] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102492] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102567] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102639] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102691] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102740] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102768] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the 
state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102936] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.102999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.103089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x203e620 is same with the state(5) to be set 00:20:09.440 [2024-07-15 22:46:27.104417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.440 [2024-07-15 22:46:27.104502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.440 [2024-07-15 22:46:27.104524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.440 [2024-07-15 22:46:27.104546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.440 [2024-07-15 22:46:27.104567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.440 [2024-07-15 22:46:27.104588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.440 [2024-07-15 22:46:27.104608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.440 [2024-07-15 22:46:27.104618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:09.441 [2024-07-15 22:46:27.104918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.104979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.104988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105145] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.441 [2024-07-15 22:46:27.105554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.441 [2024-07-15 22:46:27.105565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:67 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58072 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.442 [2024-07-15 22:46:27.105935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.105970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.105981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.105990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 
[2024-07-15 22:46:27.106050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.442 [2024-07-15 22:46:27.106557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.442 [2024-07-15 22:46:27.106567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.443 [2024-07-15 22:46:27.106576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.443 [2024-07-15 22:46:27.106595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.443 [2024-07-15 22:46:27.106614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd098d0 is same with the state(5) to be set 00:20:09.443 [2024-07-15 22:46:27.106648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58128 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58360 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58368 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106772] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58376 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58384 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58392 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58400 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.106957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58408 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.106971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.106986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.106993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58416 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58424 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58432 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58440 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58448 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58456 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58464 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 
22:46:27.107235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58472 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58480 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58488 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58496 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58504 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58512 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.107415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.107422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.107429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.126717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.126757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.443 [2024-07-15 22:46:27.126773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58528 len:8 PRP1 0x0 PRP2 0x0 00:20:09.443 [2024-07-15 22:46:27.126787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.443 [2024-07-15 22:46:27.126801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.443 [2024-07-15 22:46:27.126812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.126823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58536 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.126838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.126853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.126863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.126892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58544 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.126906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.126919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.126930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.126941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58552 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.126954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.126967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.126978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.126989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58560 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.127024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.127034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:58568 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.127069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.127079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58576 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.127114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.127124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58584 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.127162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.127173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58592 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.127209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.127221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58600 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.444 [2024-07-15 22:46:27.127262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.444 [2024-07-15 22:46:27.127273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58608 len:8 PRP1 0x0 PRP2 0x0 00:20:09.444 [2024-07-15 22:46:27.127286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127371] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd098d0 was disconnected and freed. reset controller. 
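Once nvmf_subsystem_remove_listener takes the target port down, every command still queued on the I/O qpair is completed with ABORTED - SQ DELETION (00/08), the disconnected-qpair callback frees qpair 0xd098d0, and bdev_nvme schedules a controller reset. Below is a hand-written sketch of that fault injection, using only the RPCs that appear in this log; the 2-second outage window is an assumption for illustration, since the real test paces itself with the sleep/wait calls shown above and below.
SPDK=/home/vagrant/spdk_repo/spdk
# drop the TCP listener: in-flight I/O on the qpair is aborted (SQ DELETION) and bdev_nvme
# schedules a controller reset that retries on the --reconnect-delay-sec 1 cadence
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# keep the port down for roughly two reconnect attempts; it is re-added further below,
# well inside the 5 s --ctrlr-loss-timeout-sec window
sleep 2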
00:20:09.444 22:46:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:09.444 [2024-07-15 22:46:27.127553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.444 [2024-07-15 22:46:27.127575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.444 [2024-07-15 22:46:27.127605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.444 [2024-07-15 22:46:27.127632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.444 [2024-07-15 22:46:27.127658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.444 [2024-07-15 22:46:27.127670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:09.444 [2024-07-15 22:46:27.128013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.444 [2024-07-15 22:46:27.128044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:09.444 [2024-07-15 22:46:27.128174] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.444 [2024-07-15 22:46:27.128202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8ee0 with addr=10.0.0.2, port=4420 00:20:09.444 [2024-07-15 22:46:27.128216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:09.444 [2024-07-15 22:46:27.128242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:09.444 [2024-07-15 22:46:27.128263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.444 [2024-07-15 22:46:27.128276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.444 [2024-07-15 22:46:27.128291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.444 [2024-07-15 22:46:27.128316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.444 [2024-07-15 22:46:27.128341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.379 [2024-07-15 22:46:28.128505] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.379 [2024-07-15 22:46:28.128570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8ee0 with addr=10.0.0.2, port=4420 00:20:10.379 [2024-07-15 22:46:28.128587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:10.379 [2024-07-15 22:46:28.128612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:10.379 [2024-07-15 22:46:28.128631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.379 [2024-07-15 22:46:28.128644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:10.379 [2024-07-15 22:46:28.128662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:10.379 [2024-07-15 22:46:28.128689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.379 [2024-07-15 22:46:28.128702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.379 22:46:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.635 [2024-07-15 22:46:28.421383] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.635 22:46:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82542 00:20:11.567 [2024-07-15 22:46:29.144726] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:19.677 00:20:19.677 Latency(us) 00:20:19.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.677 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.677 Verification LBA range: start 0x0 length 0x4000 00:20:19.677 NVMe0n1 : 10.01 4938.88 19.29 0.00 0.00 25877.91 2115.03 3050402.91 00:20:19.677 =================================================================================================================== 00:20:19.677 Total : 4938.88 19.29 0.00 0.00 25877.91 2115.03 3050402.91 00:20:19.677 0 00:20:19.677 22:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82648 00:20:19.677 22:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.677 22:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:19.677 Running I/O for 10 seconds... 
00:20:19.677 22:46:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.677 [2024-07-15 22:46:37.288497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 
[2024-07-15 22:46:37.288773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.288821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.288984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.288995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.677 [2024-07-15 22:46:37.289205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.677 [2024-07-15 22:46:37.289344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.677 [2024-07-15 22:46:37.289355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.289667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 
[2024-07-15 22:46:37.289679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.678 [2024-07-15 22:46:37.289981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.289992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:102 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.678 [2024-07-15 22:46:37.290284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.678 [2024-07-15 22:46:37.290296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72808 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:19.679 [2024-07-15 22:46:37.290578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.290682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290800] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.290985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.290994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.291014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.679 [2024-07-15 22:46:37.291045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.291067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.291088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.291107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.291126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.291146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.679 [2024-07-15 22:46:37.291167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.679 [2024-07-15 22:46:37.291178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.680 [2024-07-15 22:46:37.291188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.680 [2024-07-15 22:46:37.291208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.680 [2024-07-15 22:46:37.291350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39380 is same with the state(5) to be set 00:20:19.680 [2024-07-15 22:46:37.291377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:19.680 [2024-07-15 22:46:37.291385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.680 [2024-07-15 22:46:37.291393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73056 len:8 PRP1 0x0 PRP2 0x0 00:20:19.680 [2024-07-15 22:46:37.291402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.680 [2024-07-15 22:46:37.291470] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd39380 was disconnected and freed. reset controller. 
00:20:19.680 [2024-07-15 22:46:37.291714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.680 [2024-07-15 22:46:37.291813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:19.680 [2024-07-15 22:46:37.291942] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.680 [2024-07-15 22:46:37.291962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8ee0 with addr=10.0.0.2, port=4420 00:20:19.680 [2024-07-15 22:46:37.291973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:19.680 [2024-07-15 22:46:37.291991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:19.680 [2024-07-15 22:46:37.292015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.680 [2024-07-15 22:46:37.292024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.680 [2024-07-15 22:46:37.292034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.680 [2024-07-15 22:46:37.292060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.680 [2024-07-15 22:46:37.292072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.680 22:46:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:20.616 [2024-07-15 22:46:38.292276] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.616 [2024-07-15 22:46:38.292349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8ee0 with addr=10.0.0.2, port=4420 00:20:20.616 [2024-07-15 22:46:38.292365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:20.616 [2024-07-15 22:46:38.292390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:20.616 [2024-07-15 22:46:38.292409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.616 [2024-07-15 22:46:38.292418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.616 [2024-07-15 22:46:38.292430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.616 [2024-07-15 22:46:38.292472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.616 [2024-07-15 22:46:38.292484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.553 [2024-07-15 22:46:39.292647] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.553 [2024-07-15 22:46:39.292720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8ee0 with addr=10.0.0.2, port=4420 00:20:21.553 [2024-07-15 22:46:39.292736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:21.553 [2024-07-15 22:46:39.292763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:21.553 [2024-07-15 22:46:39.292782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.553 [2024-07-15 22:46:39.292792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.553 [2024-07-15 22:46:39.292802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.553 [2024-07-15 22:46:39.292829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.553 [2024-07-15 22:46:39.292854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.523 [2024-07-15 22:46:40.296465] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.523 [2024-07-15 22:46:40.296540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8ee0 with addr=10.0.0.2, port=4420 00:20:22.523 [2024-07-15 22:46:40.296557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8ee0 is same with the state(5) to be set 00:20:22.523 [2024-07-15 22:46:40.296809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb8ee0 (9): Bad file descriptor 00:20:22.523 [2024-07-15 22:46:40.297068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.523 [2024-07-15 22:46:40.297091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.523 [2024-07-15 22:46:40.297104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.523 [2024-07-15 22:46:40.300976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.523 [2024-07-15 22:46:40.301005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.523 22:46:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.781 [2024-07-15 22:46:40.582433] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.781 22:46:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82648 00:20:23.713 [2024-07-15 22:46:41.335453] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:28.981
00:20:28.981 Latency(us)
00:20:28.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:28.981 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:28.981 Verification LBA range: start 0x0 length 0x4000
00:20:28.981 NVMe0n1 : 10.01 5521.50 21.57 3739.95 0.00 13793.77 677.70 3019898.88
00:20:28.981 ===================================================================================================================
00:20:28.981 Total : 5521.50 21.57 3739.95 0.00 13793.77 0.00 3019898.88
00:20:28.981 0
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82513
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82513 ']'
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82513
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82513
00:20:28.981 killing process with pid 82513
Received shutdown signal, test time was about 10.000000 seconds
00:20:28.981
00:20:28.981 Latency(us)
00:20:28.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:28.981 ===================================================================================================================
00:20:28.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82513'
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82513
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82513
00:20:28.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82757
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82757 /var/tmp/bdevperf.sock
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82757 ']'
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:28.981 22:46:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:20:28.981 [2024-07-15 22:46:46.461479] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization...
00:20:28.982 [2024-07-15 22:46:46.461561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82757 ]
00:20:28.982 [2024-07-15 22:46:46.595021] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:28.982 [2024-07-15 22:46:46.706770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:28.982 [2024-07-15 22:46:46.762205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:20:29.915 22:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:29.915 22:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:20:29.915 22:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82773
00:20:29.915 22:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82757 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:20:29.915 22:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:20:30.173 22:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:20:30.432 NVMe0n1
00:20:30.432 22:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82815
00:20:30.432 22:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:30.432 22:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:20:30.432 Running I/O for 10 seconds...
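This second bdevperf instance is attached with explicit reconnect limits. A minimal sketch of the two RPCs issued above, with all flag values copied from the log; the comments paraphrase what the reconnect-related options control in bdev_nvme, which is background for the errors that follow rather than something printed in this output:

    # Options set by the test script before attaching the controller (values taken from the log)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # Attach cnode1 over TCP: retry a lost connection every 2 s (--reconnect-delay-sec)
    # and stop retrying, deleting the controller, once it has been unreachable for 5 s (--ctrlr-loss-timeout-sec)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2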
00:20:31.373 22:46:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.635 [2024-07-15 22:46:49.329093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329172] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329291] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329364] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329545] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the 
state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329691] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329792] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329827] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329862] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.635 [2024-07-15 22:46:49.329957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.329968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.329977] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.329986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.329995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 
22:46:49.330013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same 
with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330267] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041b80 is same with the state(5) to be set 00:20:31.636 [2024-07-15 22:46:49.330419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 
22:46:49.330492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.636 [2024-07-15 22:46:49.330854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.636 [2024-07-15 22:46:49.330865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.330887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.330912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.330924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.330936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.330945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.330956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.330965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.330977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.330986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.330998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 
22:46:49.331597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.637 [2024-07-15 22:46:49.331783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.637 [2024-07-15 22:46:49.331794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.331979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.331991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 
[2024-07-15 22:46:49.332486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.638 [2024-07-15 22:46:49.332697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.638 [2024-07-15 22:46:49.332708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.332985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.332997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130632 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.639 [2024-07-15 22:46:49.333215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a570 is same with the state(5) to be set 00:20:31.639 [2024-07-15 22:46:49.333249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.639 [2024-07-15 22:46:49.333257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.639 [2024-07-15 22:46:49.333265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60528 len:8 PRP1 0x0 PRP2 0x0 00:20:31.639 [2024-07-15 22:46:49.333274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.639 [2024-07-15 22:46:49.333328] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf8a570 was disconnected and freed. reset controller. 00:20:31.639 [2024-07-15 22:46:49.333639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:31.639 [2024-07-15 22:46:49.333727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39da0 (9): Bad file descriptor 00:20:31.639 [2024-07-15 22:46:49.333840] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:31.639 [2024-07-15 22:46:49.333861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf39da0 with addr=10.0.0.2, port=4420 00:20:31.639 [2024-07-15 22:46:49.333889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39da0 is same with the state(5) to be set 00:20:31.639 [2024-07-15 22:46:49.333910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39da0 (9): Bad file descriptor 00:20:31.639 [2024-07-15 22:46:49.333929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:31.639 [2024-07-15 22:46:49.333938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:31.639 [2024-07-15 22:46:49.333950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:31.639 [2024-07-15 22:46:49.333970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:31.639 [2024-07-15 22:46:49.333980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:31.639 22:46:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82815 00:20:33.538 [2024-07-15 22:46:51.334220] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.538 [2024-07-15 22:46:51.334292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf39da0 with addr=10.0.0.2, port=4420 00:20:33.538 [2024-07-15 22:46:51.334310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39da0 is same with the state(5) to be set 00:20:33.538 [2024-07-15 22:46:51.334337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39da0 (9): Bad file descriptor 00:20:33.538 [2024-07-15 22:46:51.334358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:33.538 [2024-07-15 22:46:51.334369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:33.538 [2024-07-15 22:46:51.334381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:33.538 [2024-07-15 22:46:51.334407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:33.538 [2024-07-15 22:46:51.334420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:36.077 [2024-07-15 22:46:53.334691] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.077 [2024-07-15 22:46:53.334761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf39da0 with addr=10.0.0.2, port=4420 00:20:36.077 [2024-07-15 22:46:53.334779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39da0 is same with the state(5) to be set 00:20:36.077 [2024-07-15 22:46:53.334806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39da0 (9): Bad file descriptor 00:20:36.077 [2024-07-15 22:46:53.334827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.077 [2024-07-15 22:46:53.334837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:36.077 [2024-07-15 22:46:53.334849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:36.077 [2024-07-15 22:46:53.334889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.077 [2024-07-15 22:46:53.334903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.979 [2024-07-15 22:46:55.335040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:37.979 [2024-07-15 22:46:55.335108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.979 [2024-07-15 22:46:55.335121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.979 [2024-07-15 22:46:55.335132] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:37.979 [2024-07-15 22:46:55.335158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.561 00:20:38.561 Latency(us) 00:20:38.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.561 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:38.561 NVMe0n1 : 8.16 2040.91 7.97 15.69 0.00 62129.17 1578.82 7015926.69 00:20:38.561 =================================================================================================================== 00:20:38.561 Total : 2040.91 7.97 15.69 0.00 62129.17 1578.82 7015926.69 00:20:38.561 0 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:38.561 Attaching 5 probes... 00:20:38.561 1298.843677: reset bdev controller NVMe0 00:20:38.561 1299.001660: reconnect bdev controller NVMe0 00:20:38.561 3299.295383: reconnect delay bdev controller NVMe0 00:20:38.561 3299.318979: reconnect bdev controller NVMe0 00:20:38.561 5299.767813: reconnect delay bdev controller NVMe0 00:20:38.561 5299.791884: reconnect bdev controller NVMe0 00:20:38.561 7300.221017: reconnect delay bdev controller NVMe0 00:20:38.561 7300.264599: reconnect bdev controller NVMe0 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82773 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82757 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82757 ']' 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82757 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82757 00:20:38.561 killing process with pid 82757 00:20:38.561 Received shutdown signal, test time was about 8.214331 seconds 00:20:38.561 00:20:38.561 Latency(us) 00:20:38.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.561 =================================================================================================================== 00:20:38.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82757' 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 82757 00:20:38.561 22:46:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82757 00:20:38.820 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.079 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:39.079 22:46:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:39.079 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.079 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.337 rmmod nvme_tcp 00:20:39.337 rmmod nvme_fabrics 00:20:39.337 rmmod nvme_keyring 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82324 ']' 00:20:39.337 22:46:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82324 00:20:39.337 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82324 ']' 00:20:39.337 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82324 00:20:39.337 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:39.337 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.337 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82324 00:20:39.337 killing process with pid 82324 00:20:39.337 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.338 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.338 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82324' 00:20:39.338 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82324 00:20:39.338 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82324 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:39.595 00:20:39.595 real 0m47.665s 00:20:39.595 user 2m20.197s 
00:20:39.595 sys 0m5.882s 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.595 22:46:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:39.595 ************************************ 00:20:39.595 END TEST nvmf_timeout 00:20:39.595 ************************************ 00:20:39.595 22:46:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.595 22:46:57 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:39.595 22:46:57 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:39.595 22:46:57 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.595 22:46:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.595 22:46:57 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:39.595 00:20:39.595 real 12m31.201s 00:20:39.595 user 30m34.334s 00:20:39.595 sys 3m7.675s 00:20:39.595 22:46:57 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.595 22:46:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.595 ************************************ 00:20:39.595 END TEST nvmf_tcp 00:20:39.595 ************************************ 00:20:39.853 22:46:57 -- common/autotest_common.sh@1142 -- # return 0 00:20:39.853 22:46:57 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:39.853 22:46:57 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:39.853 22:46:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:39.853 22:46:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.853 22:46:57 -- common/autotest_common.sh@10 -- # set +x 00:20:39.853 ************************************ 00:20:39.853 START TEST nvmf_dif 00:20:39.853 ************************************ 00:20:39.853 22:46:57 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:39.853 * Looking for test storage... 
00:20:39.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:39.853 22:46:57 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.853 22:46:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.854 22:46:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.854 22:46:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.854 22:46:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.854 22:46:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.854 22:46:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.854 22:46:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.854 22:46:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:39.854 22:46:57 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.854 22:46:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:39.854 22:46:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:39.854 22:46:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:39.854 22:46:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:39.854 22:46:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.854 22:46:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:39.854 22:46:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.854 22:46:57 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:39.854 Cannot find device "nvmf_tgt_br" 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.854 Cannot find device "nvmf_tgt_br2" 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:39.854 Cannot find device "nvmf_tgt_br" 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:39.854 Cannot find device "nvmf_tgt_br2" 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:39.854 22:46:57 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.112 
22:46:57 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:40.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:20:40.112 00:20:40.112 --- 10.0.0.2 ping statistics --- 00:20:40.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.112 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:40.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:40.112 00:20:40.112 --- 10.0.0.3 ping statistics --- 00:20:40.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.112 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:40.112 22:46:57 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:40.112 00:20:40.113 --- 10.0.0.1 ping statistics --- 00:20:40.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.113 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:40.113 22:46:57 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.113 22:46:57 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:40.113 22:46:57 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:40.113 22:46:57 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:40.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.629 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:40.629 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:40.629 22:46:58 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.629 22:46:58 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.629 22:46:58 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.630 22:46:58 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.630 22:46:58 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.630 22:46:58 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.630 22:46:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:40.630 22:46:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:40.630 22:46:58 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:40.630 22:46:58 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83246 00:20:40.630 
22:46:58 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:40.630 22:46:58 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83246 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83246 ']' 00:20:40.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.630 22:46:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:40.630 [2024-07-15 22:46:58.332661] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:20:40.630 [2024-07-15 22:46:58.332748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.888 [2024-07-15 22:46:58.470385] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.888 [2024-07-15 22:46:58.590680] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.888 [2024-07-15 22:46:58.590757] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.888 [2024-07-15 22:46:58.590772] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.888 [2024-07-15 22:46:58.590783] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.888 [2024-07-15 22:46:58.590792] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.888 [2024-07-15 22:46:58.590821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.888 [2024-07-15 22:46:58.650196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:41.823 22:46:59 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 22:46:59 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.823 22:46:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:41.823 22:46:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 [2024-07-15 22:46:59.399046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.823 22:46:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.823 22:46:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 ************************************ 00:20:41.823 START TEST fio_dif_1_default 00:20:41.823 ************************************ 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 bdev_null0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.823 22:46:59 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:41.823 [2024-07-15 22:46:59.443130] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.823 { 00:20:41.823 "params": { 00:20:41.823 "name": "Nvme$subsystem", 00:20:41.823 "trtype": "$TEST_TRANSPORT", 00:20:41.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.823 "adrfam": "ipv4", 00:20:41.823 "trsvcid": "$NVMF_PORT", 00:20:41.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.823 "hdgst": ${hdgst:-false}, 00:20:41.823 "ddgst": ${ddgst:-false} 00:20:41.823 }, 00:20:41.823 "method": "bdev_nvme_attach_controller" 00:20:41.823 } 00:20:41.823 EOF 00:20:41.823 )") 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:41.823 "params": { 00:20:41.823 "name": "Nvme0", 00:20:41.823 "trtype": "tcp", 00:20:41.823 "traddr": "10.0.0.2", 00:20:41.823 "adrfam": "ipv4", 00:20:41.823 "trsvcid": "4420", 00:20:41.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:41.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:41.823 "hdgst": false, 00:20:41.823 "ddgst": false 00:20:41.823 }, 00:20:41.823 "method": "bdev_nvme_attach_controller" 00:20:41.823 }' 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.823 22:46:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.823 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:41.823 fio-3.35 00:20:41.823 Starting 1 thread 00:20:54.065 00:20:54.065 filename0: (groupid=0, jobs=1): err= 0: pid=83318: Mon Jul 15 22:47:10 2024 00:20:54.065 read: IOPS=8613, BW=33.6MiB/s (35.3MB/s)(336MiB/10001msec) 00:20:54.065 slat (usec): min=6, max=242, avg= 8.94, stdev= 3.63 00:20:54.065 clat (usec): min=367, max=3003, avg=437.95, stdev=35.31 00:20:54.065 lat (usec): min=374, max=3049, avg=446.90, stdev=35.96 00:20:54.065 clat percentiles (usec): 00:20:54.065 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:20:54.065 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:20:54.065 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 461], 95.00th=[ 474], 00:20:54.065 | 99.00th=[ 523], 99.50th=[ 586], 99.90th=[ 693], 99.95th=[ 775], 00:20:54.065 | 99.99th=[ 1401] 00:20:54.065 bw ( KiB/s): min=31808, max=35008, per=100.00%, avg=34474.11, stdev=717.77, samples=19 00:20:54.065 iops : min= 7952, max= 8752, avg=8618.53, stdev=179.44, samples=19 00:20:54.065 lat (usec) : 500=98.32%, 750=1.62%, 1000=0.04% 00:20:54.065 lat 
(msec) : 2=0.02%, 4=0.01% 00:20:54.065 cpu : usr=84.39%, sys=13.35%, ctx=139, majf=0, minf=0 00:20:54.065 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.065 issued rwts: total=86140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.065 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:54.065 00:20:54.065 Run status group 0 (all jobs): 00:20:54.065 READ: bw=33.6MiB/s (35.3MB/s), 33.6MiB/s-33.6MiB/s (35.3MB/s-35.3MB/s), io=336MiB (353MB), run=10001-10001msec 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 ************************************ 00:20:54.065 END TEST fio_dif_1_default 00:20:54.065 ************************************ 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 00:20:54.065 real 0m11.000s 00:20:54.065 user 0m9.066s 00:20:54.065 sys 0m1.592s 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:54.065 22:47:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:54.065 22:47:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:54.065 22:47:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 ************************************ 00:20:54.065 START TEST fio_dif_1_multi_subsystems 00:20:54.065 ************************************ 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.065 22:47:10 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 bdev_null0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 [2024-07-15 22:47:10.499919] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 bdev_null1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:54.065 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.066 { 00:20:54.066 "params": { 00:20:54.066 "name": "Nvme$subsystem", 00:20:54.066 "trtype": "$TEST_TRANSPORT", 00:20:54.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.066 "adrfam": "ipv4", 00:20:54.066 "trsvcid": "$NVMF_PORT", 00:20:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.066 "hdgst": ${hdgst:-false}, 00:20:54.066 "ddgst": ${ddgst:-false} 00:20:54.066 }, 00:20:54.066 "method": "bdev_nvme_attach_controller" 00:20:54.066 } 00:20:54.066 EOF 00:20:54.066 )") 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:54.066 22:47:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.066 { 00:20:54.066 "params": { 00:20:54.066 "name": "Nvme$subsystem", 00:20:54.066 "trtype": "$TEST_TRANSPORT", 00:20:54.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.066 "adrfam": "ipv4", 00:20:54.066 "trsvcid": "$NVMF_PORT", 00:20:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.066 "hdgst": ${hdgst:-false}, 00:20:54.066 "ddgst": ${ddgst:-false} 00:20:54.066 }, 00:20:54.066 "method": "bdev_nvme_attach_controller" 00:20:54.066 } 00:20:54.066 EOF 00:20:54.066 )") 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
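Aside: the trace above shows gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem and piping the result through jq before handing it to the fio spdk_bdev plugin over /dev/fd/62. Below is a minimal stand-alone sketch of the same invocation, simplified to a single subsystem; the outer "subsystems"/"config" wrapper and the bdev name Nvme0n1 used as the fio filename are assumptions (neither is visible in this trace), while the attach parameters, ioengine, block size, queue depth, and LD_PRELOAD path are taken from the surrounding lines.

# Sketch only: write a fio-consumable SPDK bdev config to a file instead of /dev/fd/62.
cat > /tmp/spdk_fio.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# LD_PRELOAD points at the fio plugin built under build/fio/, exactly as in the trace;
# --filename=Nvme0n1 is the bdev name the attach call is expected to create (assumption).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=dif_sketch --ioengine=spdk_bdev --thread=1 \
  --spdk_json_conf=/tmp/spdk_fio.json --filename=Nvme0n1 \
  --rw=randread --bs=4k --iodepth=4 --time_based --runtime=10
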
00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:54.066 "params": { 00:20:54.066 "name": "Nvme0", 00:20:54.066 "trtype": "tcp", 00:20:54.066 "traddr": "10.0.0.2", 00:20:54.066 "adrfam": "ipv4", 00:20:54.066 "trsvcid": "4420", 00:20:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:54.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:54.066 "hdgst": false, 00:20:54.066 "ddgst": false 00:20:54.066 }, 00:20:54.066 "method": "bdev_nvme_attach_controller" 00:20:54.066 },{ 00:20:54.066 "params": { 00:20:54.066 "name": "Nvme1", 00:20:54.066 "trtype": "tcp", 00:20:54.066 "traddr": "10.0.0.2", 00:20:54.066 "adrfam": "ipv4", 00:20:54.066 "trsvcid": "4420", 00:20:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.066 "hdgst": false, 00:20:54.066 "ddgst": false 00:20:54.066 }, 00:20:54.066 "method": "bdev_nvme_attach_controller" 00:20:54.066 }' 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.066 22:47:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.066 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:54.066 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:54.066 fio-3.35 00:20:54.066 Starting 2 threads 00:21:04.035 00:21:04.035 filename0: (groupid=0, jobs=1): err= 0: pid=83477: Mon Jul 15 22:47:21 2024 00:21:04.035 read: IOPS=4337, BW=16.9MiB/s (17.8MB/s)(169MiB/10001msec) 00:21:04.035 slat (usec): min=6, max=105, avg=20.64, stdev= 8.37 00:21:04.035 clat (usec): min=420, max=3133, avg=866.61, stdev=62.28 00:21:04.035 lat (usec): min=427, max=3158, avg=887.24, stdev=65.47 00:21:04.035 clat percentiles (usec): 00:21:04.035 | 1.00th=[ 734], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 824], 00:21:04.035 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[ 865], 60.00th=[ 881], 00:21:04.035 | 70.00th=[ 898], 80.00th=[ 914], 90.00th=[ 938], 95.00th=[ 955], 00:21:04.035 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 
1029], 99.95th=[ 1045], 00:21:04.035 | 99.99th=[ 3097] 00:21:04.035 bw ( KiB/s): min=16960, max=18560, per=50.07%, avg=17373.63, stdev=494.48, samples=19 00:21:04.035 iops : min= 4240, max= 4640, avg=4343.37, stdev=123.56, samples=19 00:21:04.035 lat (usec) : 500=0.01%, 750=1.84%, 1000=97.68% 00:21:04.035 lat (msec) : 2=0.45%, 4=0.02% 00:21:04.035 cpu : usr=90.73%, sys=7.83%, ctx=126, majf=0, minf=9 00:21:04.035 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.035 issued rwts: total=43376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.035 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:04.035 filename1: (groupid=0, jobs=1): err= 0: pid=83478: Mon Jul 15 22:47:21 2024 00:21:04.035 read: IOPS=4336, BW=16.9MiB/s (17.8MB/s)(169MiB/10001msec) 00:21:04.035 slat (nsec): min=5326, max=68771, avg=20565.07, stdev=8303.06 00:21:04.035 clat (usec): min=707, max=3307, avg=866.58, stdev=53.77 00:21:04.035 lat (usec): min=716, max=3338, avg=887.15, stdev=56.00 00:21:04.035 clat percentiles (usec): 00:21:04.035 | 1.00th=[ 775], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 832], 00:21:04.035 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[ 865], 60.00th=[ 873], 00:21:04.035 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 922], 95.00th=[ 938], 00:21:04.035 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1012], 99.95th=[ 1029], 00:21:04.035 | 99.99th=[ 3097] 00:21:04.035 bw ( KiB/s): min=16960, max=18560, per=50.07%, avg=17373.63, stdev=494.59, samples=19 00:21:04.035 iops : min= 4240, max= 4640, avg=4343.37, stdev=123.59, samples=19 00:21:04.035 lat (usec) : 750=0.17%, 1000=99.62% 00:21:04.035 lat (msec) : 2=0.19%, 4=0.02% 00:21:04.035 cpu : usr=91.06%, sys=7.52%, ctx=24, majf=0, minf=0 00:21:04.035 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.035 issued rwts: total=43372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.035 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:04.035 00:21:04.035 Run status group 0 (all jobs): 00:21:04.035 READ: bw=33.9MiB/s (35.5MB/s), 16.9MiB/s-16.9MiB/s (17.8MB/s-17.8MB/s), io=339MiB (355MB), run=10001-10001msec 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 ************************************ 00:21:04.035 END TEST fio_dif_1_multi_subsystems 00:21:04.035 ************************************ 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 00:21:04.035 real 0m11.137s 00:21:04.035 user 0m18.938s 00:21:04.035 sys 0m1.811s 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 22:47:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:04.035 22:47:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:04.035 22:47:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:04.035 22:47:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 ************************************ 00:21:04.035 START TEST fio_dif_rand_params 00:21:04.035 ************************************ 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:04.035 22:47:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 bdev_null0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 [2024-07-15 22:47:21.693099] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.035 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.035 { 00:21:04.035 "params": { 00:21:04.035 "name": "Nvme$subsystem", 00:21:04.035 "trtype": "$TEST_TRANSPORT", 00:21:04.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.036 "adrfam": "ipv4", 00:21:04.036 "trsvcid": "$NVMF_PORT", 00:21:04.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.036 "hdgst": ${hdgst:-false}, 00:21:04.036 "ddgst": ${ddgst:-false} 00:21:04.036 }, 00:21:04.036 "method": "bdev_nvme_attach_controller" 00:21:04.036 } 00:21:04.036 EOF 00:21:04.036 )") 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:04.036 "params": { 00:21:04.036 "name": "Nvme0", 00:21:04.036 "trtype": "tcp", 00:21:04.036 "traddr": "10.0.0.2", 00:21:04.036 "adrfam": "ipv4", 00:21:04.036 "trsvcid": "4420", 00:21:04.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:04.036 "hdgst": false, 00:21:04.036 "ddgst": false 00:21:04.036 }, 00:21:04.036 "method": "bdev_nvme_attach_controller" 00:21:04.036 }' 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:04.036 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:04.295 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:04.295 ... 00:21:04.295 fio-3.35 00:21:04.295 Starting 3 threads 00:21:10.857 00:21:10.857 filename0: (groupid=0, jobs=1): err= 0: pid=83634: Mon Jul 15 22:47:27 2024 00:21:10.857 read: IOPS=250, BW=31.3MiB/s (32.9MB/s)(157MiB/5012msec) 00:21:10.857 slat (usec): min=7, max=128, avg=22.50, stdev=11.80 00:21:10.857 clat (usec): min=10699, max=14123, avg=11913.35, stdev=184.84 00:21:10.857 lat (usec): min=10707, max=14157, avg=11935.85, stdev=185.64 00:21:10.857 clat percentiles (usec): 00:21:10.857 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11731], 20.00th=[11863], 00:21:10.857 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11863], 60.00th=[11994], 00:21:10.857 | 70.00th=[11994], 80.00th=[11994], 90.00th=[11994], 95.00th=[12125], 00:21:10.857 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14091], 99.95th=[14091], 00:21:10.857 | 99.99th=[14091] 00:21:10.857 bw ( KiB/s): min=31488, max=32256, per=33.33%, avg=32102.40, stdev=323.82, samples=10 00:21:10.857 iops : min= 246, max= 252, avg=250.80, stdev= 2.53, samples=10 00:21:10.857 lat (msec) : 20=100.00% 00:21:10.857 cpu : usr=94.87%, sys=4.21%, ctx=148, majf=0, minf=9 00:21:10.857 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.857 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.857 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:10.857 filename0: (groupid=0, jobs=1): err= 0: pid=83635: Mon Jul 15 22:47:27 2024 00:21:10.857 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(157MiB/5009msec) 00:21:10.857 slat (nsec): min=7791, max=60309, avg=26124.18, stdev=12661.06 00:21:10.857 clat (usec): min=9482, max=15006, avg=11895.06, stdev=220.54 00:21:10.857 lat (usec): min=9491, max=15032, avg=11921.18, stdev=221.37 00:21:10.857 clat percentiles (usec): 00:21:10.857 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11863], 00:21:10.857 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:21:10.857 | 70.00th=[11994], 80.00th=[11994], 90.00th=[11994], 95.00th=[12125], 00:21:10.857 | 99.00th=[12256], 99.50th=[12387], 99.90th=[15008], 99.95th=[15008], 00:21:10.857 | 99.99th=[15008] 00:21:10.857 bw ( KiB/s): min=31488, max=32256, per=33.34%, avg=32108.70, stdev=310.89, samples=10 00:21:10.857 iops : min= 246, max= 252, avg=250.80, stdev= 2.53, samples=10 00:21:10.857 lat (msec) : 10=0.24%, 20=99.76% 00:21:10.857 cpu : usr=94.77%, sys=4.65%, ctx=6, majf=0, minf=9 00:21:10.857 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.857 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.857 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:10.857 filename0: (groupid=0, jobs=1): 
err= 0: pid=83636: Mon Jul 15 22:47:27 2024 00:21:10.857 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(157MiB/5010msec) 00:21:10.857 slat (nsec): min=7824, max=58992, avg=26266.47, stdev=12642.74 00:21:10.857 clat (usec): min=10598, max=13909, avg=11897.82, stdev=161.99 00:21:10.857 lat (usec): min=10606, max=13947, avg=11924.09, stdev=162.96 00:21:10.857 clat percentiles (usec): 00:21:10.857 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11863], 00:21:10.857 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:21:10.857 | 70.00th=[11994], 80.00th=[11994], 90.00th=[11994], 95.00th=[12125], 00:21:10.857 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13960], 99.95th=[13960], 00:21:10.857 | 99.99th=[13960] 00:21:10.857 bw ( KiB/s): min=31488, max=32256, per=33.33%, avg=32102.40, stdev=323.82, samples=10 00:21:10.857 iops : min= 246, max= 252, avg=250.80, stdev= 2.53, samples=10 00:21:10.857 lat (msec) : 20=100.00% 00:21:10.857 cpu : usr=95.39%, sys=4.07%, ctx=26, majf=0, minf=9 00:21:10.857 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.857 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.857 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:10.857 00:21:10.858 Run status group 0 (all jobs): 00:21:10.858 READ: bw=94.0MiB/s (98.6MB/s), 31.3MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=471MiB (494MB), run=5009-5012msec 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:10.858 22:47:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 bdev_null0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 [2024-07-15 22:47:27.714542] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 bdev_null1 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 bdev_null2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:10.858 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
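Aside: by this point the rand_params test has created three DIF type-2 null bdevs and exposed each behind its own TCP subsystem. rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py, so a hand-rolled sketch of the same target-side setup (assuming an nvmf target is already running and the tcp transport exists) would look like this:

# Mirrors the traced create_subsystem 0 1 2 steps; every argument below is copied from
# the rpc_cmd lines above, only the explicit scripts/rpc.py form is an assumption.
for i in 0 1 2; do
  scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
    --serial-number "53313233-$i" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.2 -s 4420
done
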
00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.859 { 00:21:10.859 "params": { 00:21:10.859 "name": "Nvme$subsystem", 00:21:10.859 "trtype": "$TEST_TRANSPORT", 00:21:10.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.859 "adrfam": "ipv4", 00:21:10.859 "trsvcid": "$NVMF_PORT", 00:21:10.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.859 "hdgst": ${hdgst:-false}, 00:21:10.859 "ddgst": ${ddgst:-false} 00:21:10.859 }, 00:21:10.859 "method": "bdev_nvme_attach_controller" 00:21:10.859 } 00:21:10.859 EOF 00:21:10.859 )") 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.859 { 00:21:10.859 "params": { 00:21:10.859 "name": "Nvme$subsystem", 00:21:10.859 "trtype": "$TEST_TRANSPORT", 00:21:10.859 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:10.859 "adrfam": "ipv4", 00:21:10.859 "trsvcid": "$NVMF_PORT", 00:21:10.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.859 "hdgst": ${hdgst:-false}, 00:21:10.859 "ddgst": ${ddgst:-false} 00:21:10.859 }, 00:21:10.859 "method": "bdev_nvme_attach_controller" 00:21:10.859 } 00:21:10.859 EOF 00:21:10.859 )") 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.859 { 00:21:10.859 "params": { 00:21:10.859 "name": "Nvme$subsystem", 00:21:10.859 "trtype": "$TEST_TRANSPORT", 00:21:10.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.859 "adrfam": "ipv4", 00:21:10.859 "trsvcid": "$NVMF_PORT", 00:21:10.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.859 "hdgst": ${hdgst:-false}, 00:21:10.859 "ddgst": ${ddgst:-false} 00:21:10.859 }, 00:21:10.859 "method": "bdev_nvme_attach_controller" 00:21:10.859 } 00:21:10.859 EOF 00:21:10.859 )") 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:10.859 "params": { 00:21:10.859 "name": "Nvme0", 00:21:10.859 "trtype": "tcp", 00:21:10.859 "traddr": "10.0.0.2", 00:21:10.859 "adrfam": "ipv4", 00:21:10.859 "trsvcid": "4420", 00:21:10.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.859 "hdgst": false, 00:21:10.859 "ddgst": false 00:21:10.859 }, 00:21:10.859 "method": "bdev_nvme_attach_controller" 00:21:10.859 },{ 00:21:10.859 "params": { 00:21:10.859 "name": "Nvme1", 00:21:10.859 "trtype": "tcp", 00:21:10.859 "traddr": "10.0.0.2", 00:21:10.859 "adrfam": "ipv4", 00:21:10.859 "trsvcid": "4420", 00:21:10.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.859 "hdgst": false, 00:21:10.859 "ddgst": false 00:21:10.859 }, 00:21:10.859 "method": "bdev_nvme_attach_controller" 00:21:10.859 },{ 00:21:10.859 "params": { 00:21:10.859 "name": "Nvme2", 00:21:10.859 "trtype": "tcp", 00:21:10.859 "traddr": "10.0.0.2", 00:21:10.859 "adrfam": "ipv4", 00:21:10.859 "trsvcid": "4420", 00:21:10.859 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.859 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:10.859 "hdgst": false, 00:21:10.859 "ddgst": false 00:21:10.859 }, 00:21:10.859 "method": "bdev_nvme_attach_controller" 00:21:10.859 }' 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.859 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.860 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.860 22:47:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:10.860 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.860 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.860 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.860 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:10.860 22:47:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.860 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:10.860 ... 00:21:10.860 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:10.860 ... 00:21:10.860 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:10.860 ... 00:21:10.860 fio-3.35 00:21:10.860 Starting 24 threads 00:21:23.119 00:21:23.119 filename0: (groupid=0, jobs=1): err= 0: pid=83731: Mon Jul 15 22:47:38 2024 00:21:23.119 read: IOPS=222, BW=891KiB/s (913kB/s)(8956KiB/10046msec) 00:21:23.119 slat (usec): min=7, max=7057, avg=26.22, stdev=191.26 00:21:23.119 clat (msec): min=8, max=173, avg=71.57, stdev=20.99 00:21:23.119 lat (msec): min=8, max=173, avg=71.60, stdev=20.99 00:21:23.119 clat percentiles (msec): 00:21:23.119 | 1.00th=[ 11], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53], 00:21:23.119 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:21:23.119 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 110], 00:21:23.119 | 99.00th=[ 125], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:21:23.119 | 99.99th=[ 174] 00:21:23.119 bw ( KiB/s): min= 640, max= 1320, per=4.22%, avg=891.15, stdev=149.94, samples=20 00:21:23.119 iops : min= 160, max= 330, avg=222.75, stdev=37.47, samples=20 00:21:23.119 lat (msec) : 10=0.58%, 20=1.56%, 50=13.76%, 100=75.93%, 250=8.17% 00:21:23.119 cpu : usr=36.82%, sys=1.46%, ctx=1117, majf=0, minf=9 00:21:23.119 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:23.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.119 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.119 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.119 filename0: (groupid=0, jobs=1): err= 0: pid=83732: Mon Jul 15 22:47:38 2024 00:21:23.119 read: IOPS=221, BW=885KiB/s (906kB/s)(8880KiB/10035msec) 00:21:23.119 slat (usec): min=4, max=8035, avg=29.49, stdev=225.57 00:21:23.119 clat (msec): min=16, max=137, avg=72.12, stdev=18.89 00:21:23.119 lat (msec): min=16, max=137, avg=72.15, stdev=18.89 00:21:23.119 clat percentiles (msec): 00:21:23.119 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:21:23.119 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:21:23.119 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:21:23.119 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:21:23.119 | 99.99th=[ 138] 00:21:23.119 bw ( KiB/s): min= 592, max= 1015, per=4.19%, avg=883.60, stdev=96.14, samples=20 00:21:23.119 iops : min= 148, max= 253, avg=220.85, 
stdev=23.99, samples=20 00:21:23.119 lat (msec) : 20=0.72%, 50=12.88%, 100=78.02%, 250=8.38% 00:21:23.119 cpu : usr=42.98%, sys=1.55%, ctx=1232, majf=0, minf=9 00:21:23.119 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:23.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.119 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.119 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.119 filename0: (groupid=0, jobs=1): err= 0: pid=83733: Mon Jul 15 22:47:38 2024 00:21:23.119 read: IOPS=218, BW=873KiB/s (894kB/s)(8756KiB/10032msec) 00:21:23.119 slat (usec): min=7, max=11030, avg=35.54, stdev=361.25 00:21:23.119 clat (msec): min=29, max=178, avg=73.17, stdev=19.01 00:21:23.119 lat (msec): min=29, max=178, avg=73.21, stdev=19.00 00:21:23.119 clat percentiles (msec): 00:21:23.119 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:21:23.119 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:21:23.119 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:21:23.119 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:21:23.119 | 99.99th=[ 180] 00:21:23.119 bw ( KiB/s): min= 560, max= 1008, per=4.12%, avg=869.25, stdev=102.25, samples=20 00:21:23.119 iops : min= 140, max= 252, avg=217.30, stdev=25.57, samples=20 00:21:23.119 lat (msec) : 50=13.66%, 100=77.89%, 250=8.45% 00:21:23.119 cpu : usr=33.64%, sys=1.25%, ctx=913, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename0: (groupid=0, jobs=1): err= 0: pid=83734: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=223, BW=893KiB/s (914kB/s)(8936KiB/10011msec) 00:21:23.120 slat (usec): min=3, max=9029, avg=26.13, stdev=190.90 00:21:23.120 clat (msec): min=12, max=139, avg=71.56, stdev=18.71 00:21:23.120 lat (msec): min=12, max=139, avg=71.59, stdev=18.71 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:21:23.120 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:23.120 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:21:23.120 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 140], 00:21:23.120 | 99.99th=[ 140] 00:21:23.120 bw ( KiB/s): min= 616, max= 976, per=4.18%, avg=882.42, stdev=99.87, samples=19 00:21:23.120 iops : min= 154, max= 244, avg=220.58, stdev=24.96, samples=19 00:21:23.120 lat (msec) : 20=0.72%, 50=15.89%, 100=76.54%, 250=6.85% 00:21:23.120 cpu : usr=33.42%, sys=1.41%, ctx=977, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename0: (groupid=0, jobs=1): err= 0: pid=83735: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=235, 
BW=942KiB/s (965kB/s)(9428KiB/10008msec) 00:21:23.120 slat (usec): min=4, max=8030, avg=31.34, stdev=218.36 00:21:23.120 clat (msec): min=7, max=144, avg=67.79, stdev=20.20 00:21:23.120 lat (msec): min=7, max=144, avg=67.82, stdev=20.20 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:21:23.120 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:21:23.120 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 108], 00:21:23.120 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 144], 00:21:23.120 | 99.99th=[ 144] 00:21:23.120 bw ( KiB/s): min= 608, max= 1104, per=4.40%, avg=929.26, stdev=119.26, samples=19 00:21:23.120 iops : min= 152, max= 276, avg=232.32, stdev=29.82, samples=19 00:21:23.120 lat (msec) : 10=0.30%, 20=0.68%, 50=20.53%, 100=71.87%, 250=6.62% 00:21:23.120 cpu : usr=40.99%, sys=1.41%, ctx=1226, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename0: (groupid=0, jobs=1): err= 0: pid=83736: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=232, BW=931KiB/s (953kB/s)(9312KiB/10003msec) 00:21:23.120 slat (usec): min=3, max=10036, avg=40.21, stdev=363.47 00:21:23.120 clat (msec): min=5, max=165, avg=68.58, stdev=21.17 00:21:23.120 lat (msec): min=5, max=165, avg=68.62, stdev=21.17 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 50], 00:21:23.120 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:21:23.120 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 109], 00:21:23.120 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 167], 00:21:23.120 | 99.99th=[ 167] 00:21:23.120 bw ( KiB/s): min= 616, max= 1072, per=4.33%, avg=914.21, stdev=112.99, samples=19 00:21:23.120 iops : min= 154, max= 268, avg=228.53, stdev=28.26, samples=19 00:21:23.120 lat (msec) : 10=0.69%, 20=0.73%, 50=21.22%, 100=69.63%, 250=7.73% 00:21:23.120 cpu : usr=39.01%, sys=1.56%, ctx=1307, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename0: (groupid=0, jobs=1): err= 0: pid=83737: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=203, BW=814KiB/s (834kB/s)(8156KiB/10018msec) 00:21:23.120 slat (usec): min=4, max=8042, avg=38.36, stdev=346.74 00:21:23.120 clat (msec): min=31, max=156, avg=78.41, stdev=19.01 00:21:23.120 lat (msec): min=31, max=156, avg=78.45, stdev=19.03 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 64], 00:21:23.120 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:21:23.120 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 114], 00:21:23.120 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 157], 00:21:23.120 | 99.99th=[ 157] 00:21:23.120 bw ( KiB/s): min= 616, max= 968, 
per=3.84%, avg=809.20, stdev=93.02, samples=20 00:21:23.120 iops : min= 154, max= 242, avg=202.30, stdev=23.25, samples=20 00:21:23.120 lat (msec) : 50=8.48%, 100=79.01%, 250=12.51% 00:21:23.120 cpu : usr=37.73%, sys=1.42%, ctx=1055, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=3.0%, 4=12.1%, 8=70.4%, 16=14.5%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=90.6%, 8=6.8%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename0: (groupid=0, jobs=1): err= 0: pid=83738: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=223, BW=892KiB/s (914kB/s)(8928KiB/10007msec) 00:21:23.120 slat (usec): min=4, max=8039, avg=34.43, stdev=328.80 00:21:23.120 clat (msec): min=7, max=172, avg=71.54, stdev=20.51 00:21:23.120 lat (msec): min=7, max=172, avg=71.57, stdev=20.51 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:21:23.120 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:21:23.120 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:21:23.120 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 171], 00:21:23.120 | 99.99th=[ 174] 00:21:23.120 bw ( KiB/s): min= 616, max= 1024, per=4.16%, avg=877.26, stdev=115.38, samples=19 00:21:23.120 iops : min= 154, max= 256, avg=219.32, stdev=28.84, samples=19 00:21:23.120 lat (msec) : 10=0.31%, 20=0.72%, 50=18.23%, 100=72.18%, 250=8.56% 00:21:23.120 cpu : usr=33.74%, sys=1.14%, ctx=913, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=88.3%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename1: (groupid=0, jobs=1): err= 0: pid=83739: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=216, BW=867KiB/s (888kB/s)(8724KiB/10061msec) 00:21:23.120 slat (usec): min=4, max=8040, avg=33.06, stdev=293.92 00:21:23.120 clat (usec): min=1553, max=179626, avg=73495.35, stdev=26386.19 00:21:23.120 lat (usec): min=1563, max=179650, avg=73528.41, stdev=26393.82 00:21:23.120 clat percentiles (usec): 00:21:23.120 | 1.00th=[ 1614], 5.00th=[ 3163], 10.00th=[ 47449], 20.00th=[ 62129], 00:21:23.120 | 30.00th=[ 68682], 40.00th=[ 71828], 50.00th=[ 72877], 60.00th=[ 79168], 00:21:23.120 | 70.00th=[ 82314], 80.00th=[ 92799], 90.00th=[103285], 95.00th=[113771], 00:21:23.120 | 99.00th=[131597], 99.50th=[143655], 99.90th=[147850], 99.95th=[156238], 00:21:23.120 | 99.99th=[179307] 00:21:23.120 bw ( KiB/s): min= 552, max= 2160, per=4.10%, avg=865.90, stdev=317.29, samples=20 00:21:23.120 iops : min= 138, max= 540, avg=216.45, stdev=79.32, samples=20 00:21:23.120 lat (msec) : 2=3.03%, 4=2.11%, 10=1.33%, 20=0.87%, 50=5.41% 00:21:23.120 lat (msec) : 100=76.11%, 250=11.14% 00:21:23.120 cpu : usr=41.88%, sys=1.56%, ctx=1321, majf=0, minf=0 00:21:23.120 IO depths : 1=0.4%, 2=3.8%, 4=13.7%, 8=68.0%, 16=14.2%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=91.3%, 8=5.7%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2181,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename1: (groupid=0, jobs=1): err= 0: pid=83740: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=222, BW=891KiB/s (913kB/s)(8916KiB/10004msec) 00:21:23.120 slat (usec): min=4, max=7998, avg=30.33, stdev=226.46 00:21:23.120 clat (msec): min=5, max=150, avg=71.66, stdev=20.71 00:21:23.120 lat (msec): min=5, max=150, avg=71.69, stdev=20.70 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:21:23.120 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:21:23.120 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:21:23.120 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 150], 00:21:23.120 | 99.99th=[ 150] 00:21:23.120 bw ( KiB/s): min= 616, max= 1000, per=4.14%, avg=873.79, stdev=119.53, samples=19 00:21:23.120 iops : min= 154, max= 250, avg=218.42, stdev=29.88, samples=19 00:21:23.120 lat (msec) : 10=0.58%, 20=0.72%, 50=16.96%, 100=74.34%, 250=7.40% 00:21:23.120 cpu : usr=34.33%, sys=1.22%, ctx=951, majf=0, minf=9 00:21:23.120 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.120 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.120 filename1: (groupid=0, jobs=1): err= 0: pid=83741: Mon Jul 15 22:47:38 2024 00:21:23.120 read: IOPS=225, BW=903KiB/s (925kB/s)(9068KiB/10043msec) 00:21:23.120 slat (usec): min=5, max=8036, avg=24.32, stdev=188.62 00:21:23.120 clat (msec): min=13, max=154, avg=70.68, stdev=20.14 00:21:23.120 lat (msec): min=13, max=154, avg=70.70, stdev=20.15 00:21:23.120 clat percentiles (msec): 00:21:23.120 | 1.00th=[ 17], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:21:23.120 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:21:23.120 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:23.120 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 144], 00:21:23.120 | 99.99th=[ 155] 00:21:23.120 bw ( KiB/s): min= 584, max= 1179, per=4.28%, avg=902.55, stdev=131.85, samples=20 00:21:23.121 iops : min= 146, max= 294, avg=225.60, stdev=32.88, samples=20 00:21:23.121 lat (msec) : 20=1.41%, 50=17.64%, 100=73.14%, 250=7.81% 00:21:23.121 cpu : usr=38.53%, sys=1.33%, ctx=1099, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename1: (groupid=0, jobs=1): err= 0: pid=83742: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=230, BW=923KiB/s (945kB/s)(9236KiB/10003msec) 00:21:23.121 slat (usec): min=4, max=8070, avg=35.77, stdev=256.08 00:21:23.121 clat (msec): min=5, max=144, avg=69.10, stdev=20.78 00:21:23.121 lat (msec): min=5, max=144, avg=69.14, stdev=20.79 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 16], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 51], 00:21:23.121 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 73], 00:21:23.121 | 70.00th=[ 
79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:21:23.121 | 99.00th=[ 122], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:21:23.121 | 99.99th=[ 144] 00:21:23.121 bw ( KiB/s): min= 616, max= 1080, per=4.30%, avg=907.00, stdev=117.69, samples=19 00:21:23.121 iops : min= 154, max= 270, avg=226.74, stdev=29.42, samples=19 00:21:23.121 lat (msec) : 10=0.56%, 20=0.69%, 50=17.54%, 100=72.93%, 250=8.27% 00:21:23.121 cpu : usr=41.34%, sys=1.55%, ctx=1649, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename1: (groupid=0, jobs=1): err= 0: pid=83743: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=203, BW=815KiB/s (834kB/s)(8156KiB/10013msec) 00:21:23.121 slat (usec): min=3, max=8032, avg=24.90, stdev=179.07 00:21:23.121 clat (msec): min=24, max=145, avg=78.39, stdev=18.93 00:21:23.121 lat (msec): min=24, max=145, avg=78.41, stdev=18.93 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 67], 00:21:23.121 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:21:23.121 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 113], 00:21:23.121 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 146], 99.95th=[ 146], 00:21:23.121 | 99.99th=[ 146] 00:21:23.121 bw ( KiB/s): min= 652, max= 976, per=3.82%, avg=805.47, stdev=92.29, samples=19 00:21:23.121 iops : min= 163, max= 244, avg=201.32, stdev=23.10, samples=19 00:21:23.121 lat (msec) : 50=10.20%, 100=76.95%, 250=12.85% 00:21:23.121 cpu : usr=32.88%, sys=1.21%, ctx=904, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=3.0%, 4=12.1%, 8=70.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=90.6%, 8=6.8%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename1: (groupid=0, jobs=1): err= 0: pid=83744: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=226, BW=906KiB/s (928kB/s)(9100KiB/10045msec) 00:21:23.121 slat (usec): min=3, max=8046, avg=28.98, stdev=265.83 00:21:23.121 clat (msec): min=8, max=155, avg=70.43, stdev=21.23 00:21:23.121 lat (msec): min=8, max=155, avg=70.46, stdev=21.24 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 51], 00:21:23.121 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:21:23.121 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:23.121 | 99.00th=[ 123], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:21:23.121 | 99.99th=[ 157] 00:21:23.121 bw ( KiB/s): min= 560, max= 1402, per=4.29%, avg=905.65, stdev=162.52, samples=20 00:21:23.121 iops : min= 140, max= 350, avg=226.35, stdev=40.53, samples=20 00:21:23.121 lat (msec) : 10=0.70%, 20=0.70%, 50=17.89%, 100=72.66%, 250=8.04% 00:21:23.121 cpu : usr=39.52%, sys=1.53%, ctx=1154, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 
0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename1: (groupid=0, jobs=1): err= 0: pid=83745: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=224, BW=899KiB/s (921kB/s)(8992KiB/10002msec) 00:21:23.121 slat (usec): min=7, max=8056, avg=36.26, stdev=305.98 00:21:23.121 clat (usec): min=1832, max=165485, avg=70984.32, stdev=21295.56 00:21:23.121 lat (usec): min=1840, max=165522, avg=71020.58, stdev=21298.65 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 6], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:21:23.121 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:21:23.121 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 108], 00:21:23.121 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 165], 00:21:23.121 | 99.99th=[ 165] 00:21:23.121 bw ( KiB/s): min= 640, max= 1024, per=4.15%, avg=876.53, stdev=110.38, samples=19 00:21:23.121 iops : min= 160, max= 256, avg=219.11, stdev=27.57, samples=19 00:21:23.121 lat (msec) : 2=0.71%, 4=0.13%, 10=0.58%, 20=0.71%, 50=14.99% 00:21:23.121 lat (msec) : 100=73.80%, 250=9.07% 00:21:23.121 cpu : usr=38.05%, sys=1.47%, ctx=1088, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=78.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename1: (groupid=0, jobs=1): err= 0: pid=83746: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=202, BW=809KiB/s (828kB/s)(8112KiB/10031msec) 00:21:23.121 slat (usec): min=5, max=4044, avg=23.20, stdev=124.58 00:21:23.121 clat (msec): min=31, max=156, avg=78.95, stdev=19.09 00:21:23.121 lat (msec): min=31, max=156, avg=78.97, stdev=19.09 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 67], 00:21:23.121 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:21:23.121 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 115], 00:21:23.121 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 157], 00:21:23.121 | 99.99th=[ 157] 00:21:23.121 bw ( KiB/s): min= 616, max= 968, per=3.81%, avg=804.25, stdev=112.55, samples=20 00:21:23.121 iops : min= 154, max= 242, avg=201.00, stdev=28.14, samples=20 00:21:23.121 lat (msec) : 50=7.84%, 100=81.07%, 250=11.09% 00:21:23.121 cpu : usr=38.14%, sys=1.55%, ctx=1143, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=3.3%, 4=13.2%, 8=69.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=90.9%, 8=6.2%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename2: (groupid=0, jobs=1): err= 0: pid=83747: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=224, BW=898KiB/s (920kB/s)(8988KiB/10007msec) 00:21:23.121 slat (usec): min=7, max=8037, avg=49.85, stdev=451.68 00:21:23.121 clat (msec): min=7, max=148, avg=71.06, stdev=20.18 00:21:23.121 lat (msec): min=7, max=148, avg=71.11, stdev=20.18 00:21:23.121 clat percentiles (msec): 
00:21:23.121 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:21:23.121 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 75], 00:21:23.121 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 111], 00:21:23.121 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:21:23.121 | 99.99th=[ 148] 00:21:23.121 bw ( KiB/s): min= 616, max= 1024, per=4.19%, avg=883.42, stdev=117.89, samples=19 00:21:23.121 iops : min= 154, max= 256, avg=220.84, stdev=29.49, samples=19 00:21:23.121 lat (msec) : 10=0.27%, 20=0.71%, 50=16.91%, 100=74.14%, 250=7.97% 00:21:23.121 cpu : usr=36.27%, sys=1.57%, ctx=1095, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=79.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename2: (groupid=0, jobs=1): err= 0: pid=83748: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=207, BW=829KiB/s (849kB/s)(8320KiB/10031msec) 00:21:23.121 slat (usec): min=5, max=8063, avg=25.92, stdev=197.34 00:21:23.121 clat (msec): min=31, max=143, avg=76.99, stdev=19.68 00:21:23.121 lat (msec): min=31, max=143, avg=77.02, stdev=19.69 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:21:23.121 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:21:23.121 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 111], 00:21:23.121 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:21:23.121 | 99.99th=[ 144] 00:21:23.121 bw ( KiB/s): min= 616, max= 1008, per=3.91%, avg=825.65, stdev=116.23, samples=20 00:21:23.121 iops : min= 154, max= 252, avg=206.40, stdev=29.05, samples=20 00:21:23.121 lat (msec) : 50=11.73%, 100=76.73%, 250=11.54% 00:21:23.121 cpu : usr=37.50%, sys=1.33%, ctx=1177, majf=0, minf=9 00:21:23.121 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=71.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 complete : 0=0.0%, 4=90.3%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.121 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.121 filename2: (groupid=0, jobs=1): err= 0: pid=83749: Mon Jul 15 22:47:38 2024 00:21:23.121 read: IOPS=228, BW=914KiB/s (935kB/s)(9152KiB/10018msec) 00:21:23.121 slat (usec): min=3, max=8029, avg=25.01, stdev=167.77 00:21:23.121 clat (msec): min=27, max=142, avg=69.92, stdev=19.50 00:21:23.121 lat (msec): min=27, max=142, avg=69.95, stdev=19.50 00:21:23.121 clat percentiles (msec): 00:21:23.121 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:21:23.121 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:21:23.121 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:21:23.121 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:21:23.121 | 99.99th=[ 142] 00:21:23.121 bw ( KiB/s): min= 672, max= 1080, per=4.32%, avg=911.79, stdev=97.47, samples=19 00:21:23.121 iops : min= 168, max= 270, avg=227.89, stdev=24.34, samples=19 00:21:23.122 lat (msec) : 50=20.32%, 100=71.81%, 250=7.87% 00:21:23.122 cpu : usr=43.44%, sys=1.72%, ctx=1582, majf=0, minf=9 00:21:23.122 IO depths : 1=0.1%, 2=0.2%, 
4=0.7%, 8=83.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:23.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 issued rwts: total=2288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.122 filename2: (groupid=0, jobs=1): err= 0: pid=83750: Mon Jul 15 22:47:38 2024 00:21:23.122 read: IOPS=226, BW=904KiB/s (926kB/s)(9072KiB/10032msec) 00:21:23.122 slat (usec): min=3, max=8029, avg=26.05, stdev=191.94 00:21:23.122 clat (msec): min=19, max=152, avg=70.60, stdev=19.46 00:21:23.122 lat (msec): min=19, max=152, avg=70.63, stdev=19.46 00:21:23.122 clat percentiles (msec): 00:21:23.122 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53], 00:21:23.122 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 74], 00:21:23.122 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 108], 00:21:23.122 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:21:23.122 | 99.99th=[ 153] 00:21:23.122 bw ( KiB/s): min= 616, max= 1024, per=4.28%, avg=902.85, stdev=99.20, samples=20 00:21:23.122 iops : min= 154, max= 256, avg=225.70, stdev=24.79, samples=20 00:21:23.122 lat (msec) : 20=0.62%, 50=16.36%, 100=75.53%, 250=7.50% 00:21:23.122 cpu : usr=41.28%, sys=1.35%, ctx=1699, majf=0, minf=9 00:21:23.122 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:23.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.122 filename2: (groupid=0, jobs=1): err= 0: pid=83751: Mon Jul 15 22:47:38 2024 00:21:23.122 read: IOPS=223, BW=893KiB/s (914kB/s)(8968KiB/10045msec) 00:21:23.122 slat (usec): min=7, max=8049, avg=40.74, stdev=419.20 00:21:23.122 clat (msec): min=8, max=148, avg=71.42, stdev=20.32 00:21:23.122 lat (msec): min=8, max=148, avg=71.46, stdev=20.33 00:21:23.122 clat percentiles (msec): 00:21:23.122 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:21:23.122 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:23.122 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:23.122 | 99.00th=[ 124], 99.50th=[ 133], 99.90th=[ 148], 99.95th=[ 148], 00:21:23.122 | 99.99th=[ 150] 00:21:23.122 bw ( KiB/s): min= 584, max= 1392, per=4.23%, avg=892.35, stdev=157.35, samples=20 00:21:23.122 iops : min= 146, max= 348, avg=223.05, stdev=39.31, samples=20 00:21:23.122 lat (msec) : 10=0.71%, 20=1.43%, 50=13.96%, 100=75.83%, 250=8.07% 00:21:23.122 cpu : usr=33.41%, sys=1.43%, ctx=967, majf=0, minf=9 00:21:23.122 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:23.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.122 filename2: (groupid=0, jobs=1): err= 0: pid=83752: Mon Jul 15 22:47:38 2024 00:21:23.122 read: IOPS=201, BW=806KiB/s (825kB/s)(8088KiB/10040msec) 00:21:23.122 slat (usec): min=3, max=8056, avg=39.29, stdev=383.65 00:21:23.122 clat (msec): min=19, max=153, avg=79.09, stdev=19.22 00:21:23.122 
lat (msec): min=19, max=153, avg=79.13, stdev=19.24 00:21:23.122 clat percentiles (msec): 00:21:23.122 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 66], 00:21:23.122 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:21:23.122 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 113], 00:21:23.122 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 153], 00:21:23.122 | 99.99th=[ 153] 00:21:23.122 bw ( KiB/s): min= 608, max= 1015, per=3.81%, avg=804.40, stdev=108.20, samples=20 00:21:23.122 iops : min= 152, max= 253, avg=201.05, stdev=26.97, samples=20 00:21:23.122 lat (msec) : 20=0.69%, 50=7.12%, 100=81.45%, 250=10.73% 00:21:23.122 cpu : usr=33.03%, sys=1.45%, ctx=1004, majf=0, minf=9 00:21:23.122 IO depths : 1=0.1%, 2=2.9%, 4=11.6%, 8=70.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:21:23.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 complete : 0=0.0%, 4=90.7%, 8=6.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.122 filename2: (groupid=0, jobs=1): err= 0: pid=83753: Mon Jul 15 22:47:38 2024 00:21:23.122 read: IOPS=222, BW=889KiB/s (910kB/s)(8928KiB/10041msec) 00:21:23.122 slat (usec): min=7, max=8035, avg=39.72, stdev=388.05 00:21:23.122 clat (msec): min=14, max=168, avg=71.71, stdev=20.04 00:21:23.122 lat (msec): min=14, max=168, avg=71.75, stdev=20.04 00:21:23.122 clat percentiles (msec): 00:21:23.122 | 1.00th=[ 16], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:21:23.122 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:21:23.122 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:21:23.122 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 157], 00:21:23.122 | 99.99th=[ 169] 00:21:23.122 bw ( KiB/s): min= 592, max= 1147, per=4.21%, avg=888.55, stdev=117.16, samples=20 00:21:23.122 iops : min= 148, max= 286, avg=222.10, stdev=29.20, samples=20 00:21:23.122 lat (msec) : 20=1.43%, 50=14.07%, 100=76.97%, 250=7.53% 00:21:23.122 cpu : usr=36.36%, sys=1.39%, ctx=1029, majf=0, minf=9 00:21:23.122 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:23.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.122 filename2: (groupid=0, jobs=1): err= 0: pid=83754: Mon Jul 15 22:47:38 2024 00:21:23.122 read: IOPS=225, BW=903KiB/s (925kB/s)(9044KiB/10010msec) 00:21:23.122 slat (usec): min=5, max=8054, avg=43.46, stdev=386.99 00:21:23.122 clat (msec): min=9, max=142, avg=70.61, stdev=19.82 00:21:23.122 lat (msec): min=9, max=142, avg=70.65, stdev=19.81 00:21:23.122 clat percentiles (msec): 00:21:23.122 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:21:23.122 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:21:23.122 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:21:23.122 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:23.122 | 99.99th=[ 144] 00:21:23.122 bw ( KiB/s): min= 592, max= 1048, per=4.27%, avg=900.70, stdev=110.31, samples=20 00:21:23.122 iops : min= 148, max= 262, avg=225.15, stdev=27.58, samples=20 00:21:23.122 lat (msec) : 10=0.13%, 20=0.40%, 50=18.75%, 100=72.36%, 250=8.36% 
00:21:23.122 cpu : usr=35.25%, sys=1.37%, ctx=997, majf=0, minf=9 00:21:23.122 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:23.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.122 issued rwts: total=2261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:23.122 00:21:23.122 Run status group 0 (all jobs): 00:21:23.122 READ: bw=20.6MiB/s (21.6MB/s), 806KiB/s-942KiB/s (825kB/s-965kB/s), io=207MiB (217MB), run=10002-10061msec 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:23.122 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 bdev_null0 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 [2024-07-15 22:47:39.219106] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
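The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py, so the subsystem that was just brought up can be reproduced by hand. A minimal sketch, assuming the nvmf tcp transport was already created earlier in the run and that rpc.py talks to the default local RPC socket; every value below is copied from the trace:

  # Recreate subsystem 0 outside the harness (sketch; transport assumed to exist).
  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 -- same as the trace.
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  # Expose it over NVMe/TCP on the address the test VM listens on.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420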
00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 bdev_null1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.123 { 00:21:23.123 "params": { 00:21:23.123 "name": "Nvme$subsystem", 00:21:23.123 "trtype": "$TEST_TRANSPORT", 00:21:23.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.123 "adrfam": "ipv4", 00:21:23.123 "trsvcid": "$NVMF_PORT", 00:21:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.123 "hdgst": ${hdgst:-false}, 00:21:23.123 "ddgst": ${ddgst:-false} 00:21:23.123 }, 00:21:23.123 "method": 
"bdev_nvme_attach_controller" 00:21:23.123 } 00:21:23.123 EOF 00:21:23.123 )") 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.123 { 00:21:23.123 "params": { 00:21:23.123 "name": "Nvme$subsystem", 00:21:23.123 "trtype": "$TEST_TRANSPORT", 00:21:23.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.123 "adrfam": "ipv4", 00:21:23.123 "trsvcid": "$NVMF_PORT", 00:21:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.123 "hdgst": ${hdgst:-false}, 00:21:23.123 "ddgst": ${ddgst:-false} 00:21:23.123 }, 00:21:23.123 "method": "bdev_nvme_attach_controller" 00:21:23.123 } 00:21:23.123 EOF 00:21:23.123 )") 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.123 "params": { 00:21:23.123 "name": "Nvme0", 00:21:23.123 "trtype": "tcp", 00:21:23.123 "traddr": "10.0.0.2", 00:21:23.123 "adrfam": "ipv4", 00:21:23.123 "trsvcid": "4420", 00:21:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:23.123 "hdgst": false, 00:21:23.123 "ddgst": false 00:21:23.123 }, 00:21:23.123 "method": "bdev_nvme_attach_controller" 00:21:23.123 },{ 00:21:23.123 "params": { 00:21:23.123 "name": "Nvme1", 00:21:23.123 "trtype": "tcp", 00:21:23.123 "traddr": "10.0.0.2", 00:21:23.123 "adrfam": "ipv4", 00:21:23.123 "trsvcid": "4420", 00:21:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.123 "hdgst": false, 00:21:23.123 "ddgst": false 00:21:23.123 }, 00:21:23.123 "method": "bdev_nvme_attach_controller" 00:21:23.123 }' 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:23.123 22:47:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.123 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:23.123 ... 00:21:23.123 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:23.123 ... 
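What the trace above boils down to: the harness LD_PRELOADs SPDK's bdev fio plugin into a stock fio binary, hands it the printed bdev_nvme_attach_controller JSON on one file descriptor and the generated job file on another. A standalone equivalent is sketched below, assuming the JSON and job file have been saved to ordinary files; spdk.json and dif_rand.fio are placeholder names, while the plugin and fio paths are the ones shown in the trace:

  # Sketch: replay the same fio run outside the test harness.
  # spdk.json     = the printed config (two NVMe/TCP controllers, Nvme0 and Nvme1)
  # dif_rand.fio  = a job file matching the printed job lines (randread,
  #                 bs 8k/16k/128k, iodepth=8, 2 jobs per file, 5 s runtime)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./spdk.json ./dif_rand.fio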
00:21:23.123 fio-3.35 00:21:23.123 Starting 4 threads 00:21:27.347 00:21:27.347 filename0: (groupid=0, jobs=1): err= 0: pid=83892: Mon Jul 15 22:47:45 2024 00:21:27.347 read: IOPS=1739, BW=13.6MiB/s (14.2MB/s)(68.0MiB/5001msec) 00:21:27.347 slat (nsec): min=7122, max=67845, avg=17272.79, stdev=8748.94 00:21:27.347 clat (usec): min=1592, max=7074, avg=4537.69, stdev=869.38 00:21:27.347 lat (usec): min=1608, max=7098, avg=4554.96, stdev=868.44 00:21:27.347 clat percentiles (usec): 00:21:27.347 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 4490], 00:21:27.347 | 30.00th=[ 4686], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:21:27.347 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5211], 00:21:27.347 | 99.00th=[ 6259], 99.50th=[ 6325], 99.90th=[ 6521], 99.95th=[ 6587], 00:21:27.347 | 99.99th=[ 7046] 00:21:27.347 bw ( KiB/s): min=12254, max=17117, per=21.82%, avg=13980.00, stdev=1911.10, samples=9 00:21:27.347 iops : min= 1531, max= 2139, avg=1747.33, stdev=238.82, samples=9 00:21:27.347 lat (msec) : 2=0.28%, 4=16.44%, 10=83.29% 00:21:27.347 cpu : usr=94.44%, sys=4.72%, ctx=7, majf=0, minf=0 00:21:27.347 IO depths : 1=0.1%, 2=17.7%, 4=54.0%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:27.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 issued rwts: total=8699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.347 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:27.347 filename0: (groupid=0, jobs=1): err= 0: pid=83893: Mon Jul 15 22:47:45 2024 00:21:27.347 read: IOPS=1856, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:21:27.347 slat (nsec): min=7122, max=74326, avg=17271.00, stdev=9062.90 00:21:27.347 clat (usec): min=919, max=7422, avg=4252.90, stdev=993.58 00:21:27.347 lat (usec): min=928, max=7458, avg=4270.17, stdev=993.73 00:21:27.347 clat percentiles (usec): 00:21:27.347 | 1.00th=[ 1876], 5.00th=[ 2311], 10.00th=[ 2638], 20.00th=[ 2966], 00:21:27.347 | 30.00th=[ 4293], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:21:27.347 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5145], 00:21:27.347 | 99.00th=[ 5735], 99.50th=[ 6194], 99.90th=[ 6521], 99.95th=[ 6587], 00:21:27.347 | 99.99th=[ 7439] 00:21:27.347 bw ( KiB/s): min=12416, max=17440, per=23.45%, avg=15025.78, stdev=2069.58, samples=9 00:21:27.347 iops : min= 1552, max= 2180, avg=1878.22, stdev=258.70, samples=9 00:21:27.347 lat (usec) : 1000=0.03% 00:21:27.347 lat (msec) : 2=1.18%, 4=27.49%, 10=71.29% 00:21:27.347 cpu : usr=94.62%, sys=4.54%, ctx=9, majf=0, minf=0 00:21:27.347 IO depths : 1=0.1%, 2=12.9%, 4=56.8%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:27.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 issued rwts: total=9284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.347 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:27.347 filename1: (groupid=0, jobs=1): err= 0: pid=83894: Mon Jul 15 22:47:45 2024 00:21:27.347 read: IOPS=2198, BW=17.2MiB/s (18.0MB/s)(85.9MiB/5003msec) 00:21:27.347 slat (nsec): min=6336, max=70604, avg=18572.92, stdev=8681.14 00:21:27.347 clat (usec): min=1345, max=7080, avg=3593.81, stdev=1079.33 00:21:27.347 lat (usec): min=1360, max=7123, avg=3612.38, stdev=1078.54 00:21:27.347 clat percentiles (usec): 00:21:27.347 | 1.00th=[ 1926], 5.00th=[ 2008], 10.00th=[ 2212], 20.00th=[ 
2540], 00:21:27.347 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3163], 60.00th=[ 4490], 00:21:27.347 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 4948], 00:21:27.347 | 99.00th=[ 5145], 99.50th=[ 5145], 99.90th=[ 5735], 99.95th=[ 6390], 00:21:27.347 | 99.99th=[ 6980] 00:21:27.347 bw ( KiB/s): min=16864, max=18544, per=27.31%, avg=17496.00, stdev=586.66, samples=9 00:21:27.347 iops : min= 2108, max= 2318, avg=2186.89, stdev=73.42, samples=9 00:21:27.347 lat (msec) : 2=3.95%, 4=51.52%, 10=44.53% 00:21:27.347 cpu : usr=93.32%, sys=5.60%, ctx=10, majf=0, minf=9 00:21:27.347 IO depths : 1=0.1%, 2=0.5%, 4=63.4%, 8=35.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:27.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 issued rwts: total=11000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.347 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:27.347 filename1: (groupid=0, jobs=1): err= 0: pid=83895: Mon Jul 15 22:47:45 2024 00:21:27.347 read: IOPS=2215, BW=17.3MiB/s (18.2MB/s)(86.6MiB/5001msec) 00:21:27.347 slat (nsec): min=7621, max=73159, avg=18617.10, stdev=8191.17 00:21:27.347 clat (usec): min=1026, max=7028, avg=3566.23, stdev=1080.84 00:21:27.347 lat (usec): min=1034, max=7066, avg=3584.85, stdev=1079.87 00:21:27.347 clat percentiles (usec): 00:21:27.347 | 1.00th=[ 1778], 5.00th=[ 2024], 10.00th=[ 2180], 20.00th=[ 2507], 00:21:27.347 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 3130], 60.00th=[ 4424], 00:21:27.347 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 4948], 00:21:27.347 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 6325], 00:21:27.347 | 99.99th=[ 6521] 00:21:27.347 bw ( KiB/s): min=16928, max=18544, per=27.52%, avg=17630.11, stdev=554.70, samples=9 00:21:27.347 iops : min= 2116, max= 2318, avg=2203.67, stdev=69.42, samples=9 00:21:27.347 lat (msec) : 2=3.68%, 4=52.98%, 10=43.34% 00:21:27.347 cpu : usr=93.80%, sys=5.16%, ctx=7, majf=0, minf=9 00:21:27.347 IO depths : 1=0.1%, 2=0.3%, 4=63.7%, 8=36.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:27.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.347 issued rwts: total=11081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.347 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:27.347 00:21:27.347 Run status group 0 (all jobs): 00:21:27.347 READ: bw=62.6MiB/s (65.6MB/s), 13.6MiB/s-17.3MiB/s (14.2MB/s-18.2MB/s), io=313MiB (328MB), run=5001-5003msec 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.605 
22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.605 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:27.605 ************************************ 00:21:27.605 END TEST fio_dif_rand_params 00:21:27.605 ************************************ 00:21:27.606 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.606 00:21:27.606 real 0m23.746s 00:21:27.606 user 2m5.812s 00:21:27.606 sys 0m6.004s 00:21:27.606 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.606 22:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:27.864 22:47:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:27.864 22:47:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:27.864 22:47:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:27.864 22:47:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.864 22:47:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:27.864 ************************************ 00:21:27.864 START TEST fio_dif_digest 00:21:27.864 ************************************ 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- 
target/dif.sh@130 -- # create_subsystems 0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:27.864 bdev_null0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:27.864 [2024-07-15 22:47:45.491615] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.864 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.864 { 00:21:27.864 "params": { 00:21:27.864 "name": "Nvme$subsystem", 00:21:27.864 "trtype": "$TEST_TRANSPORT", 00:21:27.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.864 "adrfam": "ipv4", 00:21:27.864 "trsvcid": "$NVMF_PORT", 00:21:27.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.865 "hdgst": ${hdgst:-false}, 00:21:27.865 "ddgst": ${ddgst:-false} 00:21:27.865 }, 
00:21:27.865 "method": "bdev_nvme_attach_controller" 00:21:27.865 } 00:21:27.865 EOF 00:21:27.865 )") 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:27.865 "params": { 00:21:27.865 "name": "Nvme0", 00:21:27.865 "trtype": "tcp", 00:21:27.865 "traddr": "10.0.0.2", 00:21:27.865 "adrfam": "ipv4", 00:21:27.865 "trsvcid": "4420", 00:21:27.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.865 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:27.865 "hdgst": true, 00:21:27.865 "ddgst": true 00:21:27.865 }, 00:21:27.865 "method": "bdev_nvme_attach_controller" 00:21:27.865 }' 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:27.865 22:47:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.865 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:27.865 ... 
00:21:27.865 fio-3.35 00:21:27.865 Starting 3 threads 00:21:40.134 00:21:40.134 filename0: (groupid=0, jobs=1): err= 0: pid=84002: Mon Jul 15 22:47:56 2024 00:21:40.134 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(274MiB/10006msec) 00:21:40.134 slat (nsec): min=8047, max=90705, avg=29231.58, stdev=14406.60 00:21:40.134 clat (usec): min=12341, max=15125, avg=13621.28, stdev=190.92 00:21:40.134 lat (usec): min=12351, max=15162, avg=13650.51, stdev=192.06 00:21:40.134 clat percentiles (usec): 00:21:40.134 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:21:40.134 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:21:40.134 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:40.134 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15008], 99.95th=[15139], 00:21:40.134 | 99.99th=[15139] 00:21:40.134 bw ( KiB/s): min=27648, max=28416, per=33.31%, avg=28034.75, stdev=391.34, samples=20 00:21:40.134 iops : min= 216, max= 222, avg=219.00, stdev= 3.08, samples=20 00:21:40.134 lat (msec) : 20=100.00% 00:21:40.134 cpu : usr=95.77%, sys=3.61%, ctx=91, majf=0, minf=0 00:21:40.134 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.134 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.134 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:40.134 filename0: (groupid=0, jobs=1): err= 0: pid=84003: Mon Jul 15 22:47:56 2024 00:21:40.134 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(274MiB/10007msec) 00:21:40.134 slat (nsec): min=7214, max=79671, avg=22945.08, stdev=11505.79 00:21:40.134 clat (usec): min=13200, max=16688, avg=13640.44, stdev=199.50 00:21:40.134 lat (usec): min=13232, max=16726, avg=13663.38, stdev=200.28 00:21:40.134 clat percentiles (usec): 00:21:40.134 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13435], 20.00th=[13566], 00:21:40.134 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:21:40.134 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:40.134 | 99.00th=[14091], 99.50th=[14484], 99.90th=[16712], 99.95th=[16712], 00:21:40.134 | 99.99th=[16712] 00:21:40.134 bw ( KiB/s): min=27648, max=28416, per=33.31%, avg=28032.00, stdev=393.98, samples=20 00:21:40.134 iops : min= 216, max= 222, avg=219.00, stdev= 3.08, samples=20 00:21:40.134 lat (msec) : 20=100.00% 00:21:40.134 cpu : usr=93.88%, sys=5.59%, ctx=12, majf=0, minf=0 00:21:40.134 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.134 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.134 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:40.134 filename0: (groupid=0, jobs=1): err= 0: pid=84004: Mon Jul 15 22:47:56 2024 00:21:40.134 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(274MiB/10005msec) 00:21:40.134 slat (nsec): min=8075, max=90705, avg=29624.99, stdev=13974.12 00:21:40.134 clat (usec): min=13192, max=15009, avg=13617.34, stdev=169.25 00:21:40.134 lat (usec): min=13226, max=15037, avg=13646.96, stdev=170.66 00:21:40.134 clat percentiles (usec): 00:21:40.134 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:21:40.134 | 30.00th=[13566], 40.00th=[13566], 
50.00th=[13566], 60.00th=[13698], 00:21:40.134 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:40.134 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15008], 99.95th=[15008], 00:21:40.134 | 99.99th=[15008] 00:21:40.134 bw ( KiB/s): min=27648, max=28416, per=33.31%, avg=28032.00, stdev=393.98, samples=20 00:21:40.134 iops : min= 216, max= 222, avg=219.00, stdev= 3.08, samples=20 00:21:40.134 lat (msec) : 20=100.00% 00:21:40.134 cpu : usr=95.51%, sys=3.84%, ctx=25, majf=0, minf=9 00:21:40.134 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.134 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.134 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:40.134 00:21:40.134 Run status group 0 (all jobs): 00:21:40.134 READ: bw=82.2MiB/s (86.2MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=822MiB (862MB), run=10005-10007msec 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 ************************************ 00:21:40.134 END TEST fio_dif_digest 00:21:40.134 ************************************ 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.134 00:21:40.134 real 0m11.007s 00:21:40.134 user 0m29.163s 00:21:40.134 sys 0m1.573s 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.134 22:47:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 22:47:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:40.134 22:47:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:40.134 22:47:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.134 rmmod nvme_tcp 00:21:40.134 rmmod nvme_fabrics 00:21:40.134 rmmod nvme_keyring 00:21:40.134 22:47:56 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:40.134 22:47:56 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83246 ']' 00:21:40.135 22:47:56 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83246 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83246 ']' 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83246 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83246 00:21:40.135 killing process with pid 83246 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83246' 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83246 00:21:40.135 22:47:56 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83246 00:21:40.135 22:47:56 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:40.135 22:47:56 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:40.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.135 Waiting for block devices as requested 00:21:40.135 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:40.135 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:40.135 22:47:57 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.135 22:47:57 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.135 22:47:57 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.135 22:47:57 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.135 22:47:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.135 22:47:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:40.135 22:47:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.135 22:47:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:40.135 ************************************ 00:21:40.135 END TEST nvmf_dif 00:21:40.135 ************************************ 00:21:40.135 00:21:40.135 real 0m59.996s 00:21:40.135 user 3m51.155s 00:21:40.135 sys 0m16.440s 00:21:40.135 22:47:57 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.135 22:47:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:40.135 22:47:57 -- common/autotest_common.sh@1142 -- # return 0 00:21:40.135 22:47:57 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:40.135 22:47:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.135 22:47:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.135 22:47:57 -- common/autotest_common.sh@10 -- # set +x 00:21:40.135 ************************************ 00:21:40.135 START TEST nvmf_abort_qd_sizes 00:21:40.135 ************************************ 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:40.135 * Looking for test storage... 00:21:40.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:40.135 22:47:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:40.135 Cannot find device "nvmf_tgt_br" 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.135 Cannot find device "nvmf_tgt_br2" 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:40.135 Cannot find device "nvmf_tgt_br" 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:40.135 Cannot find device "nvmf_tgt_br2" 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.135 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.135 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:40.135 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:40.136 22:47:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:40.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:21:40.136 00:21:40.136 --- 10.0.0.2 ping statistics --- 00:21:40.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.136 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:40.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:21:40.136 00:21:40.136 --- 10.0.0.3 ping statistics --- 00:21:40.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.136 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:40.136 00:21:40.136 --- 10.0.0.1 ping statistics --- 00:21:40.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.136 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:40.136 22:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:41.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:41.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:41.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84598 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84598 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84598 ']' 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.070 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:41.070 [2024-07-15 22:47:58.902079] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:41.070 [2024-07-15 22:47:58.902406] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.328 [2024-07-15 22:47:59.043525] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.587 [2024-07-15 22:47:59.167924] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.587 [2024-07-15 22:47:59.168215] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.587 [2024-07-15 22:47:59.168403] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.587 [2024-07-15 22:47:59.168471] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.587 [2024-07-15 22:47:59.168585] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.587 [2024-07-15 22:47:59.168806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.587 [2024-07-15 22:47:59.168922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.587 [2024-07-15 22:47:59.169597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.587 [2024-07-15 22:47:59.169650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.587 [2024-07-15 22:47:59.228374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:42.154 22:47:59 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:42.154 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:42.155 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
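The enumeration traced above finds NVMe controllers purely by PCI class code (class 01, subclass 08, prog-if 02) rather than by driver binding. Collapsed into one pipeline, the commands the helper runs amount to the sketch below; the per-BDF driver check mirrors the [[ -e /sys/bus/pci/drivers/nvme/<bdf> ]] tests above, and the two example addresses are the ones this run found.

# sketch: list NVMe controllers by BDF, as iter_pci_class_code 01 08 02 does above
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

# the helper then inspects each returned BDF's kernel driver binding, e.g.:
for bdf in 0000:00:10.0 0000:00:11.0; do
    if [[ -e /sys/bus/pci/drivers/nvme/$bdf ]]; then
        echo "$bdf is bound to the kernel nvme driver"
    else
        echo "$bdf is not on the kernel nvme driver (e.g. uio_pci_generic/vfio-pci)"
    fi
done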
00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.413 22:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 ************************************ 00:21:42.413 START TEST spdk_target_abort 00:21:42.413 ************************************ 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 spdk_targetn1 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 [2024-07-15 22:48:00.094650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.413 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:42.414 [2024-07-15 22:48:00.126893] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.414 22:48:00 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:42.414 22:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:45.759 Initializing NVMe Controllers 00:21:45.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:45.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:45.759 Initialization complete. Launching workers. 
00:21:45.759 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10371, failed: 0 00:21:45.759 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1053, failed to submit 9318 00:21:45.759 success 692, unsuccess 361, failed 0 00:21:45.759 22:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:45.759 22:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:49.039 Initializing NVMe Controllers 00:21:49.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:49.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:49.039 Initialization complete. Launching workers. 00:21:49.039 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:21:49.039 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1143, failed to submit 7809 00:21:49.039 success 402, unsuccess 741, failed 0 00:21:49.039 22:48:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:49.039 22:48:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:52.323 Initializing NVMe Controllers 00:21:52.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:52.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:52.323 Initialization complete. Launching workers. 
00:21:52.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30937, failed: 0 00:21:52.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2233, failed to submit 28704 00:21:52.323 success 461, unsuccess 1772, failed 0 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.323 22:48:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84598 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84598 ']' 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84598 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84598 00:21:52.889 killing process with pid 84598 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84598' 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84598 00:21:52.889 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84598 00:21:53.148 ************************************ 00:21:53.148 END TEST spdk_target_abort 00:21:53.148 ************************************ 00:21:53.148 00:21:53.148 real 0m10.771s 00:21:53.148 user 0m43.861s 00:21:53.148 sys 0m1.967s 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:53.148 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:53.148 22:48:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:53.148 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:53.148 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.148 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:53.148 
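Before the kernel-target variant starts below, the spdk_target_abort flow that just finished can be summarized as a short RPC sequence plus three runs of the abort example (queue depths 4, 24 and 64, 50/50 read/write mix, 4096-byte I/O). The sketch below is a condensed reading of the trace, not the test script itself; it assumes the nvmf_tgt started earlier in this log is still running and that rpc.py reaches it at the default /var/tmp/spdk.sock.

# sketch: what spdk_target_abort drives over RPC, condensed from the trace above
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# one abort run per queue depth, against the subsystem just exported
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

# teardown, as in the trace
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
./scripts/rpc.py bdev_nvme_detach_controller spdk_target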
************************************ 00:21:53.148 START TEST kernel_target_abort 00:21:53.148 ************************************ 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:53.148 22:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:53.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.406 Waiting for block devices as requested 00:21:53.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:53.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:53.664 No valid GPT data, bailing 00:21:53.664 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:53.922 No valid GPT data, bailing 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:53.922 No valid GPT data, bailing 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:53.922 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:53.923 No valid GPT data, bailing 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:53.923 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 --hostid=d591d0cc-2041-4f11-80f5-97d971e06385 -a 10.0.0.1 -t tcp -s 4420 00:21:54.181 00:21:54.181 Discovery Log Number of Records 2, Generation counter 2 00:21:54.181 =====Discovery Log Entry 0====== 00:21:54.181 trtype: tcp 00:21:54.181 adrfam: ipv4 00:21:54.181 subtype: current discovery subsystem 00:21:54.181 treq: not specified, sq flow control disable supported 00:21:54.181 portid: 1 00:21:54.181 trsvcid: 4420 00:21:54.181 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:54.181 traddr: 10.0.0.1 00:21:54.181 eflags: none 00:21:54.181 sectype: none 00:21:54.181 =====Discovery Log Entry 1====== 00:21:54.181 trtype: tcp 00:21:54.181 adrfam: ipv4 00:21:54.181 subtype: nvme subsystem 00:21:54.181 treq: not specified, sq flow control disable supported 00:21:54.181 portid: 1 00:21:54.181 trsvcid: 4420 00:21:54.181 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:54.181 traddr: 10.0.0.1 00:21:54.181 eflags: none 00:21:54.181 sectype: none 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:54.181 22:48:11 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:54.181 22:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:57.482 Initializing NVMe Controllers 00:21:57.482 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:57.482 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:57.482 Initialization complete. Launching workers. 00:21:57.482 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31568, failed: 0 00:21:57.482 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31568, failed to submit 0 00:21:57.482 success 0, unsuccess 31568, failed 0 00:21:57.482 22:48:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:57.482 22:48:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:00.798 Initializing NVMe Controllers 00:22:00.798 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:00.798 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:00.798 Initialization complete. Launching workers. 
00:22:00.798 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68279, failed: 0 00:22:00.799 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29362, failed to submit 38917 00:22:00.799 success 0, unsuccess 29362, failed 0 00:22:00.799 22:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:00.799 22:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:04.105 Initializing NVMe Controllers 00:22:04.105 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:04.105 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:04.105 Initialization complete. Launching workers. 00:22:04.105 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85740, failed: 0 00:22:04.105 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21410, failed to submit 64330 00:22:04.105 success 0, unsuccess 21410, failed 0 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:04.105 22:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:04.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:06.279 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:06.279 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:06.279 00:22:06.279 real 0m13.182s 00:22:06.279 user 0m6.091s 00:22:06.279 sys 0m4.405s 00:22:06.279 ************************************ 00:22:06.279 END TEST kernel_target_abort 00:22:06.279 ************************************ 00:22:06.279 22:48:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.279 22:48:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:06.279 
22:48:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.279 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.279 rmmod nvme_tcp 00:22:06.537 rmmod nvme_fabrics 00:22:06.537 rmmod nvme_keyring 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.537 Process with pid 84598 is not found 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84598 ']' 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84598 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84598 ']' 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84598 00:22:06.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84598) - No such process 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84598 is not found' 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:22:06.537 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:06.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:06.794 Waiting for block devices as requested 00:22:06.794 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:07.052 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:07.052 ************************************ 00:22:07.052 END TEST nvmf_abort_qd_sizes 00:22:07.052 ************************************ 00:22:07.052 00:22:07.052 real 0m27.301s 00:22:07.052 user 0m51.168s 00:22:07.052 sys 0m7.800s 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.052 22:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:07.052 22:48:24 -- common/autotest_common.sh@1142 -- # return 0 00:22:07.052 22:48:24 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:07.052 22:48:24 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:22:07.052 22:48:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.052 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:22:07.052 ************************************ 00:22:07.052 START TEST keyring_file 00:22:07.052 ************************************ 00:22:07.052 22:48:24 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:07.308 * Looking for test storage... 00:22:07.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:07.308 22:48:24 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:07.308 22:48:24 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.308 22:48:24 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.308 22:48:24 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.308 22:48:24 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.308 22:48:24 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.308 22:48:24 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.308 22:48:24 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.308 22:48:24 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.308 22:48:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:07.309 22:48:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@47 -- # : 0 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:07.309 22:48:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:07.309 22:48:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:07.309 22:48:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:07.309 22:48:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:07.309 22:48:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:07.309 22:48:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DA8N1Xmc56 00:22:07.309 22:48:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:07.309 22:48:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DA8N1Xmc56 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DA8N1Xmc56 00:22:07.309 22:48:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DA8N1Xmc56 00:22:07.309 22:48:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dgyXL5Zg8e 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:07.309 22:48:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:07.309 22:48:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:07.309 22:48:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:07.309 22:48:25 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:07.309 22:48:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:07.309 22:48:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dgyXL5Zg8e 00:22:07.309 22:48:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dgyXL5Zg8e 00:22:07.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.309 22:48:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dgyXL5Zg8e 00:22:07.309 22:48:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=85465 00:22:07.309 22:48:25 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:07.309 22:48:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85465 00:22:07.309 22:48:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85465 ']' 00:22:07.309 22:48:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.309 22:48:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.309 22:48:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.309 22:48:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.309 22:48:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:07.566 [2024-07-15 22:48:25.158796] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:22:07.566 [2024-07-15 22:48:25.159163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85465 ] 00:22:07.566 [2024-07-15 22:48:25.300985] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.823 [2024-07-15 22:48:25.428758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.823 [2024-07-15 22:48:25.485500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:08.389 22:48:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.389 [2024-07-15 22:48:26.158776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.389 null0 00:22:08.389 [2024-07-15 22:48:26.190764] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.389 [2024-07-15 22:48:26.191183] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:08.389 [2024-07-15 22:48:26.198771] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.389 22:48:26 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:08.389 22:48:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.390 [2024-07-15 22:48:26.210737] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:08.390 request: 00:22:08.390 { 00:22:08.390 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.390 "secure_channel": false, 00:22:08.390 "listen_address": { 00:22:08.390 "trtype": "tcp", 00:22:08.390 "traddr": "127.0.0.1", 00:22:08.390 "trsvcid": "4420" 00:22:08.390 }, 00:22:08.390 "method": "nvmf_subsystem_add_listener", 00:22:08.390 "req_id": 1 00:22:08.390 } 00:22:08.390 Got JSON-RPC error response 00:22:08.390 response: 00:22:08.390 { 00:22:08.390 "code": -32602, 00:22:08.390 "message": "Invalid parameters" 00:22:08.390 } 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
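[editorial sketch] The trace above ends with the test deliberately re-adding the listener it just created and expecting the "Listener already exists" JSON-RPC failure. A minimal standalone reproduction of that duplicate-listener check, assuming an spdk_tgt is already running on the default /var/tmp/spdk.sock and nqn.2016-06.io.spdk:cnode0 already has a TCP listener on 127.0.0.1:4420 (the rpc.py path and arguments are the same ones traced in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Re-adding the same listener must fail with a JSON-RPC error, as it did above.
    if "$rpc" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
            nqn.2016-06.io.spdk:cnode0; then
        echo "unexpected: duplicate listener was accepted" >&2
        exit 1
    fi
    echo "duplicate listener rejected as expected"

The test wraps the same idea in its NOT helper, which inverts the exit status so the overall run keeps going when the RPC fails as intended.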
00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.390 22:48:26 keyring_file -- keyring/file.sh@46 -- # bperfpid=85482 00:22:08.390 22:48:26 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:08.390 22:48:26 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85482 /var/tmp/bperf.sock 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85482 ']' 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:08.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.390 22:48:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.648 [2024-07-15 22:48:26.270602] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:08.648 [2024-07-15 22:48:26.270930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85482 ] 00:22:08.648 [2024-07-15 22:48:26.405309] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.906 [2024-07-15 22:48:26.571633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.906 [2024-07-15 22:48:26.632026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:09.472 22:48:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.472 22:48:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:09.472 22:48:27 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:09.472 22:48:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:09.730 22:48:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dgyXL5Zg8e 00:22:09.730 22:48:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dgyXL5Zg8e 00:22:09.988 22:48:27 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:22:09.988 22:48:27 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:22:09.988 22:48:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.988 22:48:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.988 22:48:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.246 22:48:27 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DA8N1Xmc56 == 
\/\t\m\p\/\t\m\p\.\D\A\8\N\1\X\m\c\5\6 ]] 00:22:10.246 22:48:27 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:22:10.246 22:48:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:10.246 22:48:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.246 22:48:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:10.246 22:48:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.504 22:48:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dgyXL5Zg8e == \/\t\m\p\/\t\m\p\.\d\g\y\X\L\5\Z\g\8\e ]] 00:22:10.504 22:48:28 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:22:10.504 22:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:10.504 22:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.504 22:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.504 22:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.504 22:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.762 22:48:28 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:22:10.762 22:48:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:22:10.762 22:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:10.762 22:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.762 22:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.762 22:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.762 22:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:11.097 22:48:28 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:11.097 22:48:28 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:11.097 22:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:11.356 [2024-07-15 22:48:28.995049] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.356 nvme0n1 00:22:11.356 22:48:29 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:22:11.356 22:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:11.356 22:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:11.356 22:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.356 22:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:11.356 22:48:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.614 22:48:29 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:22:11.614 22:48:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:22:11.614 22:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:11.614 22:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:11.614 22:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:22:11.614 22:48:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.614 22:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:11.872 22:48:29 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:22:11.873 22:48:29 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:12.131 Running I/O for 1 seconds... 00:22:13.066 00:22:13.066 Latency(us) 00:22:13.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.067 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:13.067 nvme0n1 : 1.01 9066.18 35.41 0.00 0.00 14071.64 5570.56 253564.74 00:22:13.067 =================================================================================================================== 00:22:13.067 Total : 9066.18 35.41 0.00 0.00 14071.64 5570.56 253564.74 00:22:13.067 0 00:22:13.067 22:48:30 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:13.067 22:48:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:13.324 22:48:30 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:22:13.324 22:48:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:13.324 22:48:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.324 22:48:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.324 22:48:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.324 22:48:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:13.582 22:48:31 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:22:13.582 22:48:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:22:13.582 22:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.582 22:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:13.582 22:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.582 22:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.582 22:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:13.840 22:48:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:13.840 22:48:31 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
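[editorial sketch] The repeated get_refcnt checks above all reduce to one pipeline: dump the keyring over the bperf RPC socket, select the key by name, and read its refcnt. A minimal restatement, assuming the bdevperf RPC socket /var/tmp/bperf.sock is up and key0 has been registered with keyring_file_add_key (the jq filters are exactly the ones traced in keyring/common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # List all keyring keys over the bperf socket, pick key0, and read its refcnt.
    refcnt=$("$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
            | jq '.[] | select(.name == "key0")' \
            | jq -r .refcnt)
    echo "key0 refcnt: $refcnt"

A refcnt of 1 means only the keyring holds the key; it climbs to 2 once bdev_nvme_attach_controller takes a reference via --psk, which is what the (( 2 == 2 )) assertions above are verifying.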
00:22:13.840 22:48:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.840 22:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:14.098 [2024-07-15 22:48:31.714841] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.098 [2024-07-15 22:48:31.715050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898710 (107): Transport endpoint is not connected 00:22:14.098 [2024-07-15 22:48:31.716038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898710 (9): Bad file descriptor 00:22:14.098 [2024-07-15 22:48:31.717034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.098 [2024-07-15 22:48:31.717057] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:14.098 [2024-07-15 22:48:31.717068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.098 request: 00:22:14.098 { 00:22:14.098 "name": "nvme0", 00:22:14.098 "trtype": "tcp", 00:22:14.098 "traddr": "127.0.0.1", 00:22:14.098 "adrfam": "ipv4", 00:22:14.098 "trsvcid": "4420", 00:22:14.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.098 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:14.098 "prchk_reftag": false, 00:22:14.098 "prchk_guard": false, 00:22:14.098 "hdgst": false, 00:22:14.098 "ddgst": false, 00:22:14.098 "psk": "key1", 00:22:14.098 "method": "bdev_nvme_attach_controller", 00:22:14.098 "req_id": 1 00:22:14.098 } 00:22:14.098 Got JSON-RPC error response 00:22:14.098 response: 00:22:14.098 { 00:22:14.098 "code": -5, 00:22:14.098 "message": "Input/output error" 00:22:14.098 } 00:22:14.098 22:48:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:14.098 22:48:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.098 22:48:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:14.098 22:48:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.098 22:48:31 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:22:14.098 22:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:14.098 22:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.098 22:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.098 22:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.098 22:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:14.357 22:48:32 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:22:14.357 22:48:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:22:14.357 22:48:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.357 22:48:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:14.357 22:48:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.357 22:48:32 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.357 22:48:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:14.615 22:48:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:14.615 22:48:32 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:22:14.615 22:48:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:14.873 22:48:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:22:14.873 22:48:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:15.131 22:48:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:22:15.131 22:48:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.131 22:48:32 keyring_file -- keyring/file.sh@77 -- # jq length 00:22:15.442 22:48:33 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:22:15.442 22:48:33 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DA8N1Xmc56 00:22:15.442 22:48:33 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.442 22:48:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:15.442 22:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:15.442 [2024-07-15 22:48:33.263274] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DA8N1Xmc56': 0100660 00:22:15.442 [2024-07-15 22:48:33.263326] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:15.442 request: 00:22:15.442 { 00:22:15.442 "name": "key0", 00:22:15.442 "path": "/tmp/tmp.DA8N1Xmc56", 00:22:15.442 "method": "keyring_file_add_key", 00:22:15.442 "req_id": 1 00:22:15.442 } 00:22:15.442 Got JSON-RPC error response 00:22:15.442 response: 00:22:15.442 { 00:22:15.442 "code": -1, 00:22:15.442 "message": "Operation not permitted" 00:22:15.442 } 00:22:15.700 22:48:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:15.700 22:48:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.700 22:48:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.700 22:48:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.700 22:48:33 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DA8N1Xmc56 00:22:15.700 22:48:33 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:15.700 22:48:33 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DA8N1Xmc56 00:22:15.700 22:48:33 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DA8N1Xmc56 00:22:15.958 22:48:33 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:22:15.958 22:48:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:15.958 22:48:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:15.958 22:48:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.958 22:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.958 22:48:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:15.958 22:48:33 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:22:15.958 22:48:33 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.958 22:48:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.958 22:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.216 [2024-07-15 22:48:33.971459] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DA8N1Xmc56': No such file or directory 00:22:16.216 [2024-07-15 22:48:33.971511] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:16.216 [2024-07-15 22:48:33.971537] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:16.216 [2024-07-15 22:48:33.971547] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:16.216 [2024-07-15 22:48:33.971556] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:16.216 request: 00:22:16.216 { 00:22:16.216 "name": "nvme0", 00:22:16.216 "trtype": "tcp", 00:22:16.216 "traddr": "127.0.0.1", 00:22:16.216 "adrfam": "ipv4", 00:22:16.216 "trsvcid": "4420", 00:22:16.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:16.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:16.216 "prchk_reftag": false, 00:22:16.216 "prchk_guard": false, 00:22:16.216 "hdgst": false, 00:22:16.216 "ddgst": false, 00:22:16.216 "psk": "key0", 00:22:16.216 "method": "bdev_nvme_attach_controller", 00:22:16.216 "req_id": 1 00:22:16.216 } 00:22:16.216 
Got JSON-RPC error response 00:22:16.216 response: 00:22:16.216 { 00:22:16.216 "code": -19, 00:22:16.216 "message": "No such device" 00:22:16.216 } 00:22:16.216 22:48:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:16.216 22:48:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.216 22:48:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.216 22:48:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.216 22:48:33 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:22:16.216 22:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:16.474 22:48:34 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jivzkKqT0d 00:22:16.474 22:48:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:16.474 22:48:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:16.474 22:48:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:16.474 22:48:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:16.474 22:48:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:16.474 22:48:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:16.474 22:48:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:16.733 22:48:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jivzkKqT0d 00:22:16.733 22:48:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jivzkKqT0d 00:22:16.733 22:48:34 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.jivzkKqT0d 00:22:16.733 22:48:34 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jivzkKqT0d 00:22:16.733 22:48:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jivzkKqT0d 00:22:16.733 22:48:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.733 22:48:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:17.300 nvme0n1 00:22:17.300 22:48:34 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:22:17.300 22:48:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:17.300 22:48:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:17.300 22:48:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:17.300 22:48:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:17.300 22:48:34 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.300 22:48:35 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:22:17.300 22:48:35 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:22:17.300 22:48:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:17.558 22:48:35 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:22:17.558 22:48:35 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:22:17.558 22:48:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:17.558 22:48:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:17.558 22:48:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.816 22:48:35 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:22:17.816 22:48:35 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:22:17.816 22:48:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:17.816 22:48:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:17.816 22:48:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:17.816 22:48:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:17.816 22:48:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.073 22:48:35 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:18.073 22:48:35 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:18.073 22:48:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:18.332 22:48:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:18.332 22:48:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.332 22:48:36 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:18.589 22:48:36 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:18.589 22:48:36 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jivzkKqT0d 00:22:18.589 22:48:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jivzkKqT0d 00:22:18.847 22:48:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dgyXL5Zg8e 00:22:18.847 22:48:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dgyXL5Zg8e 00:22:19.107 22:48:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:19.107 22:48:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:19.365 nvme0n1 00:22:19.365 22:48:37 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:19.365 22:48:37 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:19.623 22:48:37 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:19.623 "subsystems": [ 00:22:19.623 { 00:22:19.623 "subsystem": "keyring", 00:22:19.623 "config": [ 00:22:19.623 { 00:22:19.623 "method": "keyring_file_add_key", 00:22:19.623 "params": { 00:22:19.623 "name": "key0", 00:22:19.623 "path": "/tmp/tmp.jivzkKqT0d" 00:22:19.623 } 00:22:19.623 }, 00:22:19.623 { 00:22:19.623 "method": "keyring_file_add_key", 00:22:19.623 "params": { 00:22:19.623 "name": "key1", 00:22:19.623 "path": "/tmp/tmp.dgyXL5Zg8e" 00:22:19.624 } 00:22:19.624 } 00:22:19.624 ] 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "subsystem": "iobuf", 00:22:19.624 "config": [ 00:22:19.624 { 00:22:19.624 "method": "iobuf_set_options", 00:22:19.624 "params": { 00:22:19.624 "small_pool_count": 8192, 00:22:19.624 "large_pool_count": 1024, 00:22:19.624 "small_bufsize": 8192, 00:22:19.624 "large_bufsize": 135168 00:22:19.624 } 00:22:19.624 } 00:22:19.624 ] 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "subsystem": "sock", 00:22:19.624 "config": [ 00:22:19.624 { 00:22:19.624 "method": "sock_set_default_impl", 00:22:19.624 "params": { 00:22:19.624 "impl_name": "uring" 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "sock_impl_set_options", 00:22:19.624 "params": { 00:22:19.624 "impl_name": "ssl", 00:22:19.624 "recv_buf_size": 4096, 00:22:19.624 "send_buf_size": 4096, 00:22:19.624 "enable_recv_pipe": true, 00:22:19.624 "enable_quickack": false, 00:22:19.624 "enable_placement_id": 0, 00:22:19.624 "enable_zerocopy_send_server": true, 00:22:19.624 "enable_zerocopy_send_client": false, 00:22:19.624 "zerocopy_threshold": 0, 00:22:19.624 "tls_version": 0, 00:22:19.624 "enable_ktls": false 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "sock_impl_set_options", 00:22:19.624 "params": { 00:22:19.624 "impl_name": "posix", 00:22:19.624 "recv_buf_size": 2097152, 00:22:19.624 "send_buf_size": 2097152, 00:22:19.624 "enable_recv_pipe": true, 00:22:19.624 "enable_quickack": false, 00:22:19.624 "enable_placement_id": 0, 00:22:19.624 "enable_zerocopy_send_server": true, 00:22:19.624 "enable_zerocopy_send_client": false, 00:22:19.624 "zerocopy_threshold": 0, 00:22:19.624 "tls_version": 0, 00:22:19.624 "enable_ktls": false 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "sock_impl_set_options", 00:22:19.624 "params": { 00:22:19.624 "impl_name": "uring", 00:22:19.624 "recv_buf_size": 2097152, 00:22:19.624 "send_buf_size": 2097152, 00:22:19.624 "enable_recv_pipe": true, 00:22:19.624 "enable_quickack": false, 00:22:19.624 "enable_placement_id": 0, 00:22:19.624 "enable_zerocopy_send_server": false, 00:22:19.624 "enable_zerocopy_send_client": false, 00:22:19.624 "zerocopy_threshold": 0, 00:22:19.624 "tls_version": 0, 00:22:19.624 "enable_ktls": false 00:22:19.624 } 00:22:19.624 } 00:22:19.624 ] 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "subsystem": "vmd", 00:22:19.624 "config": [] 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "subsystem": "accel", 00:22:19.624 "config": [ 00:22:19.624 { 00:22:19.624 "method": "accel_set_options", 00:22:19.624 "params": { 00:22:19.624 "small_cache_size": 128, 00:22:19.624 "large_cache_size": 16, 00:22:19.624 "task_count": 2048, 00:22:19.624 "sequence_count": 2048, 00:22:19.624 "buf_count": 2048 00:22:19.624 } 00:22:19.624 } 00:22:19.624 ] 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "subsystem": "bdev", 00:22:19.624 "config": [ 00:22:19.624 { 
00:22:19.624 "method": "bdev_set_options", 00:22:19.624 "params": { 00:22:19.624 "bdev_io_pool_size": 65535, 00:22:19.624 "bdev_io_cache_size": 256, 00:22:19.624 "bdev_auto_examine": true, 00:22:19.624 "iobuf_small_cache_size": 128, 00:22:19.624 "iobuf_large_cache_size": 16 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "bdev_raid_set_options", 00:22:19.624 "params": { 00:22:19.624 "process_window_size_kb": 1024 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "bdev_iscsi_set_options", 00:22:19.624 "params": { 00:22:19.624 "timeout_sec": 30 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "bdev_nvme_set_options", 00:22:19.624 "params": { 00:22:19.624 "action_on_timeout": "none", 00:22:19.624 "timeout_us": 0, 00:22:19.624 "timeout_admin_us": 0, 00:22:19.624 "keep_alive_timeout_ms": 10000, 00:22:19.624 "arbitration_burst": 0, 00:22:19.624 "low_priority_weight": 0, 00:22:19.624 "medium_priority_weight": 0, 00:22:19.624 "high_priority_weight": 0, 00:22:19.624 "nvme_adminq_poll_period_us": 10000, 00:22:19.624 "nvme_ioq_poll_period_us": 0, 00:22:19.624 "io_queue_requests": 512, 00:22:19.624 "delay_cmd_submit": true, 00:22:19.624 "transport_retry_count": 4, 00:22:19.624 "bdev_retry_count": 3, 00:22:19.624 "transport_ack_timeout": 0, 00:22:19.624 "ctrlr_loss_timeout_sec": 0, 00:22:19.624 "reconnect_delay_sec": 0, 00:22:19.624 "fast_io_fail_timeout_sec": 0, 00:22:19.624 "disable_auto_failback": false, 00:22:19.624 "generate_uuids": false, 00:22:19.624 "transport_tos": 0, 00:22:19.624 "nvme_error_stat": false, 00:22:19.624 "rdma_srq_size": 0, 00:22:19.624 "io_path_stat": false, 00:22:19.624 "allow_accel_sequence": false, 00:22:19.624 "rdma_max_cq_size": 0, 00:22:19.624 "rdma_cm_event_timeout_ms": 0, 00:22:19.624 "dhchap_digests": [ 00:22:19.624 "sha256", 00:22:19.624 "sha384", 00:22:19.624 "sha512" 00:22:19.624 ], 00:22:19.624 "dhchap_dhgroups": [ 00:22:19.624 "null", 00:22:19.624 "ffdhe2048", 00:22:19.624 "ffdhe3072", 00:22:19.624 "ffdhe4096", 00:22:19.624 "ffdhe6144", 00:22:19.624 "ffdhe8192" 00:22:19.624 ] 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "bdev_nvme_attach_controller", 00:22:19.624 "params": { 00:22:19.624 "name": "nvme0", 00:22:19.624 "trtype": "TCP", 00:22:19.624 "adrfam": "IPv4", 00:22:19.624 "traddr": "127.0.0.1", 00:22:19.624 "trsvcid": "4420", 00:22:19.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:19.624 "prchk_reftag": false, 00:22:19.624 "prchk_guard": false, 00:22:19.624 "ctrlr_loss_timeout_sec": 0, 00:22:19.624 "reconnect_delay_sec": 0, 00:22:19.624 "fast_io_fail_timeout_sec": 0, 00:22:19.624 "psk": "key0", 00:22:19.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:19.624 "hdgst": false, 00:22:19.624 "ddgst": false 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "bdev_nvme_set_hotplug", 00:22:19.624 "params": { 00:22:19.624 "period_us": 100000, 00:22:19.624 "enable": false 00:22:19.624 } 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "method": "bdev_wait_for_examine" 00:22:19.624 } 00:22:19.624 ] 00:22:19.624 }, 00:22:19.624 { 00:22:19.624 "subsystem": "nbd", 00:22:19.624 "config": [] 00:22:19.624 } 00:22:19.624 ] 00:22:19.624 }' 00:22:19.624 22:48:37 keyring_file -- keyring/file.sh@114 -- # killprocess 85482 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85482 ']' 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85482 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85482 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85482' 00:22:19.624 killing process with pid 85482 00:22:19.624 Received shutdown signal, test time was about 1.000000 seconds 00:22:19.624 00:22:19.624 Latency(us) 00:22:19.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.624 =================================================================================================================== 00:22:19.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@967 -- # kill 85482 00:22:19.624 22:48:37 keyring_file -- common/autotest_common.sh@972 -- # wait 85482 00:22:19.883 22:48:37 keyring_file -- keyring/file.sh@117 -- # bperfpid=85727 00:22:19.883 22:48:37 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85727 /var/tmp/bperf.sock 00:22:19.883 22:48:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85727 ']' 00:22:19.883 22:48:37 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:19.883 "subsystems": [ 00:22:19.883 { 00:22:19.883 "subsystem": "keyring", 00:22:19.883 "config": [ 00:22:19.883 { 00:22:19.883 "method": "keyring_file_add_key", 00:22:19.883 "params": { 00:22:19.883 "name": "key0", 00:22:19.883 "path": "/tmp/tmp.jivzkKqT0d" 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "keyring_file_add_key", 00:22:19.883 "params": { 00:22:19.883 "name": "key1", 00:22:19.883 "path": "/tmp/tmp.dgyXL5Zg8e" 00:22:19.883 } 00:22:19.883 } 00:22:19.883 ] 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "subsystem": "iobuf", 00:22:19.883 "config": [ 00:22:19.883 { 00:22:19.883 "method": "iobuf_set_options", 00:22:19.883 "params": { 00:22:19.883 "small_pool_count": 8192, 00:22:19.883 "large_pool_count": 1024, 00:22:19.883 "small_bufsize": 8192, 00:22:19.883 "large_bufsize": 135168 00:22:19.883 } 00:22:19.883 } 00:22:19.883 ] 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "subsystem": "sock", 00:22:19.883 "config": [ 00:22:19.883 { 00:22:19.883 "method": "sock_set_default_impl", 00:22:19.883 "params": { 00:22:19.883 "impl_name": "uring" 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "sock_impl_set_options", 00:22:19.883 "params": { 00:22:19.883 "impl_name": "ssl", 00:22:19.883 "recv_buf_size": 4096, 00:22:19.883 "send_buf_size": 4096, 00:22:19.883 "enable_recv_pipe": true, 00:22:19.883 "enable_quickack": false, 00:22:19.883 "enable_placement_id": 0, 00:22:19.883 "enable_zerocopy_send_server": true, 00:22:19.883 "enable_zerocopy_send_client": false, 00:22:19.883 "zerocopy_threshold": 0, 00:22:19.883 "tls_version": 0, 00:22:19.883 "enable_ktls": false 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "sock_impl_set_options", 00:22:19.883 "params": { 00:22:19.883 "impl_name": "posix", 00:22:19.883 "recv_buf_size": 2097152, 00:22:19.883 "send_buf_size": 2097152, 00:22:19.883 "enable_recv_pipe": true, 00:22:19.883 "enable_quickack": false, 00:22:19.883 "enable_placement_id": 0, 00:22:19.883 "enable_zerocopy_send_server": true, 00:22:19.883 "enable_zerocopy_send_client": false, 00:22:19.883 "zerocopy_threshold": 
0, 00:22:19.883 "tls_version": 0, 00:22:19.883 "enable_ktls": false 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "sock_impl_set_options", 00:22:19.883 "params": { 00:22:19.883 "impl_name": "uring", 00:22:19.883 "recv_buf_size": 2097152, 00:22:19.883 "send_buf_size": 2097152, 00:22:19.883 "enable_recv_pipe": true, 00:22:19.883 "enable_quickack": false, 00:22:19.883 "enable_placement_id": 0, 00:22:19.883 "enable_zerocopy_send_server": false, 00:22:19.883 "enable_zerocopy_send_client": false, 00:22:19.883 "zerocopy_threshold": 0, 00:22:19.883 "tls_version": 0, 00:22:19.883 "enable_ktls": false 00:22:19.883 } 00:22:19.883 } 00:22:19.883 ] 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "subsystem": "vmd", 00:22:19.883 "config": [] 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "subsystem": "accel", 00:22:19.883 "config": [ 00:22:19.883 { 00:22:19.883 "method": "accel_set_options", 00:22:19.883 "params": { 00:22:19.883 "small_cache_size": 128, 00:22:19.883 "large_cache_size": 16, 00:22:19.883 "task_count": 2048, 00:22:19.883 "sequence_count": 2048, 00:22:19.883 "buf_count": 2048 00:22:19.883 } 00:22:19.883 } 00:22:19.883 ] 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "subsystem": "bdev", 00:22:19.883 "config": [ 00:22:19.883 { 00:22:19.883 "method": "bdev_set_options", 00:22:19.883 "params": { 00:22:19.883 "bdev_io_pool_size": 65535, 00:22:19.883 "bdev_io_cache_size": 256, 00:22:19.883 "bdev_auto_examine": true, 00:22:19.883 "iobuf_small_cache_size": 128, 00:22:19.883 "iobuf_large_cache_size": 16 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "bdev_raid_set_options", 00:22:19.883 "params": { 00:22:19.883 "process_window_size_kb": 1024 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "bdev_iscsi_set_options", 00:22:19.883 "params": { 00:22:19.883 "timeout_sec": 30 00:22:19.883 } 00:22:19.883 }, 00:22:19.883 { 00:22:19.883 "method": "bdev_nvme_set_options", 00:22:19.883 "params": { 00:22:19.883 "action_on_timeout": "none", 00:22:19.883 "timeout_us": 0, 00:22:19.883 "timeout_admin_us": 0, 00:22:19.883 "keep_alive_timeout_ms": 10000, 00:22:19.883 "arbitration_burst": 0, 00:22:19.883 "low_priority_weight": 0, 00:22:19.883 "medium_priority_weight": 0, 00:22:19.883 "high_priority_weight": 0, 00:22:19.883 "nvme_adminq_poll_period_us": 10000, 00:22:19.883 "nvme_ioq_poll_period_us": 0, 00:22:19.883 "io_queue_requests": 512, 00:22:19.883 "delay_cmd_submit": true, 00:22:19.883 "transport_retry_count": 4, 00:22:19.883 "bdev_retry_count": 3, 00:22:19.883 "transport_ack_timeout": 0, 00:22:19.883 "ctrlr_loss_timeout_sec": 0, 00:22:19.883 "reconnect_delay_sec": 0, 00:22:19.883 "fast_io_fail_timeout_sec": 0, 00:22:19.883 "disable_auto_failback": false, 00:22:19.883 "generate_uuids": false, 00:22:19.883 "transport_tos": 0, 00:22:19.883 "nvme_error_stat": false, 00:22:19.883 "rdma_srq_size": 0, 00:22:19.883 "io_path_stat": false, 00:22:19.883 "allow_accel_sequence": false, 00:22:19.883 "rdma_max_cq_size": 0, 00:22:19.883 "rdma_cm_event_timeout_ms": 0, 00:22:19.883 "dhchap_digests": [ 00:22:19.883 "sha256", 00:22:19.883 "sha384", 00:22:19.883 "sha512" 00:22:19.883 ], 00:22:19.883 "dhchap_dhgroups": [ 00:22:19.883 "null", 00:22:19.884 "ffdhe2048", 00:22:19.884 "ffdhe3072", 00:22:19.884 "ffdhe4096", 00:22:19.884 "ffdhe6144", 00:22:19.884 "ffdhe8192" 00:22:19.884 ] 00:22:19.884 } 00:22:19.884 }, 00:22:19.884 { 00:22:19.884 "method": "bdev_nvme_attach_controller", 00:22:19.884 "params": { 00:22:19.884 "name": "nvme0", 00:22:19.884 "trtype": "TCP", 00:22:19.884 
"adrfam": "IPv4", 00:22:19.884 "traddr": "127.0.0.1", 00:22:19.884 "trsvcid": "4420", 00:22:19.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:19.884 "prchk_reftag": false, 00:22:19.884 "prchk_guard": false, 00:22:19.884 "ctrlr_loss_timeout_sec": 0, 00:22:19.884 "reconnect_delay_sec": 0, 00:22:19.884 "fast_io_fail_timeout_sec": 0, 00:22:19.884 "psk": "key0", 00:22:19.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:19.884 "hdgst": false, 00:22:19.884 "ddgst": false 00:22:19.884 } 00:22:19.884 }, 00:22:19.884 { 00:22:19.884 "method": "bdev_nvme_set_hotplug", 00:22:19.884 "params": { 00:22:19.884 "period_us": 100000, 00:22:19.884 "enable": false 00:22:19.884 } 00:22:19.884 }, 00:22:19.884 { 00:22:19.884 "method": "bdev_wait_for_examine" 00:22:19.884 } 00:22:19.884 ] 00:22:19.884 }, 00:22:19.884 { 00:22:19.884 "subsystem": "nbd", 00:22:19.884 "config": [] 00:22:19.884 } 00:22:19.884 ] 00:22:19.884 }' 00:22:19.884 22:48:37 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:19.884 22:48:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:19.884 22:48:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.884 22:48:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.884 22:48:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.884 22:48:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:20.143 [2024-07-15 22:48:37.728073] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:22:20.143 [2024-07-15 22:48:37.728165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85727 ] 00:22:20.143 [2024-07-15 22:48:37.858904] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.401 [2024-07-15 22:48:37.985799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.401 [2024-07-15 22:48:38.119367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:20.401 [2024-07-15 22:48:38.174218] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.967 22:48:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.967 22:48:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:20.967 22:48:38 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:20.967 22:48:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:20.967 22:48:38 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:21.226 22:48:38 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:21.226 22:48:38 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:21.226 22:48:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:21.226 22:48:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:21.226 22:48:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.226 22:48:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.226 22:48:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:21.483 22:48:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:21.483 22:48:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:21.483 22:48:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:21.483 22:48:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:21.483 22:48:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.483 22:48:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.484 22:48:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:21.740 22:48:39 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:21.740 22:48:39 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:21.740 22:48:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:21.740 22:48:39 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:21.998 22:48:39 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:21.998 22:48:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:21.998 22:48:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jivzkKqT0d /tmp/tmp.dgyXL5Zg8e 00:22:21.998 22:48:39 keyring_file -- keyring/file.sh@20 -- # killprocess 85727 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85727 ']' 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85727 00:22:21.998 22:48:39 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85727 00:22:21.998 killing process with pid 85727 00:22:21.998 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.998 00:22:21.998 Latency(us) 00:22:21.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.998 =================================================================================================================== 00:22:21.998 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85727' 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@967 -- # kill 85727 00:22:21.998 22:48:39 keyring_file -- common/autotest_common.sh@972 -- # wait 85727 00:22:22.256 22:48:39 keyring_file -- keyring/file.sh@21 -- # killprocess 85465 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85465 ']' 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85465 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85465 00:22:22.256 killing process with pid 85465 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85465' 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@967 -- # kill 85465 00:22:22.256 [2024-07-15 22:48:39.963683] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:22.256 22:48:39 keyring_file -- common/autotest_common.sh@972 -- # wait 85465 00:22:22.823 00:22:22.823 real 0m15.511s 00:22:22.823 user 0m38.473s 00:22:22.823 sys 0m2.974s 00:22:22.823 22:48:40 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.823 ************************************ 00:22:22.823 END TEST keyring_file 00:22:22.823 ************************************ 00:22:22.823 22:48:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:22.823 22:48:40 -- common/autotest_common.sh@1142 -- # return 0 00:22:22.823 22:48:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:22.823 22:48:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:22.823 22:48:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:22.823 22:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.823 22:48:40 -- common/autotest_common.sh@10 -- # set +x 00:22:22.823 ************************************ 00:22:22.823 START TEST keyring_linux 00:22:22.823 ************************************ 00:22:22.823 22:48:40 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:22.823 * Looking for test 
storage... 00:22:22.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:22.823 22:48:40 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:22.823 22:48:40 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d591d0cc-2041-4f11-80f5-97d971e06385 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=d591d0cc-2041-4f11-80f5-97d971e06385 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.823 22:48:40 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.823 22:48:40 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.823 22:48:40 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.823 22:48:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.823 22:48:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.823 22:48:40 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.823 22:48:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:22.823 22:48:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.823 22:48:40 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:22.824 22:48:40 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:22.824 /tmp/:spdk-test:key0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:22.824 22:48:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:22.824 22:48:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:22.824 /tmp/:spdk-test:key1 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85845 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.824 22:48:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85845 00:22:22.824 22:48:40 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85845 ']' 00:22:22.824 22:48:40 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.824 22:48:40 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.824 22:48:40 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.824 22:48:40 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.824 22:48:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:23.082 [2024-07-15 22:48:40.678501] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
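
The two /tmp/:spdk-test:key files prepared above are written in the NVMe TLS PSK interchange format (the NVMeTLSkey-1:00:...: strings echoed by prep_key). A rough sketch of what the format_interchange_psk helper appears to compute for key0 follows; the little-endian CRC32 suffix and the use of the literal hex string as the wrapped payload are assumptions inferred from the trace, not confirmed from the helper's source:

```sh
# Sketch only: derive an interchange-format PSK string from the test key.
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the literal hex string is the payload here
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed: CRC32 of the payload, little-endian
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")  # 00 = no hash
EOF
```

If those assumptions hold, this prints the same NVMeTLSkey-1:00:MDAx... string that appears in the trace.
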
00:22:23.082 [2024-07-15 22:48:40.678832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85845 ] 00:22:23.082 [2024-07-15 22:48:40.817117] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.353 [2024-07-15 22:48:40.925791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.353 [2024-07-15 22:48:40.978496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:23.941 22:48:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:23.941 [2024-07-15 22:48:41.590603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.941 null0 00:22:23.941 [2024-07-15 22:48:41.622550] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.941 [2024-07-15 22:48:41.622788] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.941 22:48:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:23.941 274364393 00:22:23.941 22:48:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:23.941 15003575 00:22:23.941 22:48:41 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:23.941 22:48:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85862 00:22:23.941 22:48:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85862 /var/tmp/bperf.sock 00:22:23.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85862 ']' 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.941 22:48:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:23.941 [2024-07-15 22:48:41.695681] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
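
Unlike keyring_file, this test keeps the PSKs in the kernel session keyring and hands SPDK only the key names (:spdk-test:key0 and :spdk-test:key1). The keyctl plumbing exercised across the rest of the trace (add during setup, search and print for verification, unlink during cleanup) reduces to roughly the following; serial numbers differ per run:

```sh
# Sketch of the kernel-keyring lifecycle used by linux.sh.
sn=$(keyctl add user ":spdk-test:key0" "$psk" @s)   # add the PSK to the session keyring
keyctl search @s user ":spdk-test:key0"             # resolve the name to a serial number
keyctl print "$sn"                                  # dump the payload for verification
keyctl unlink "$sn"                                 # drop the key once the test is done
```
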
00:22:23.941 [2024-07-15 22:48:41.695917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85862 ] 00:22:24.200 [2024-07-15 22:48:41.828453] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.200 [2024-07-15 22:48:41.936121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.134 22:48:42 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.134 22:48:42 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:25.134 22:48:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:25.134 22:48:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:25.134 22:48:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:25.134 22:48:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.392 [2024-07-15 22:48:43.189977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:25.648 22:48:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:25.648 22:48:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:25.648 [2024-07-15 22:48:43.458465] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.906 nvme0n1 00:22:25.906 22:48:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:25.906 22:48:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:25.906 22:48:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:25.906 22:48:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:25.906 22:48:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:25.906 22:48:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.164 22:48:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:26.164 22:48:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:26.164 22:48:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:26.164 22:48:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:26.164 22:48:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:26.164 22:48:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.164 22:48:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:26.422 22:48:44 keyring_linux -- keyring/linux.sh@25 -- # sn=274364393 00:22:26.422 22:48:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:26.422 22:48:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:26.422 
22:48:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 274364393 == \2\7\4\3\6\4\3\9\3 ]] 00:22:26.422 22:48:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 274364393 00:22:26.422 22:48:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:26.422 22:48:44 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:26.422 Running I/O for 1 seconds... 00:22:27.372 00:22:27.372 Latency(us) 00:22:27.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.372 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:27.372 nvme0n1 : 1.01 12964.42 50.64 0.00 0.00 9814.94 3157.64 13107.20 00:22:27.372 =================================================================================================================== 00:22:27.372 Total : 12964.42 50.64 0.00 0.00 9814.94 3157.64 13107.20 00:22:27.372 0 00:22:27.372 22:48:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:27.372 22:48:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:27.937 22:48:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:27.937 22:48:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:27.937 22:48:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:27.937 22:48:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:27.937 22:48:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:27.937 22:48:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:28.195 22:48:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:28.195 22:48:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:28.195 22:48:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:28.195 22:48:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.195 22:48:45 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:28.195 22:48:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:28.453 [2024-07-15 22:48:46.046617] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.453 [2024-07-15 22:48:46.046616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac45f0 (107): Transport endpoint is not connected 00:22:28.453 [2024-07-15 22:48:46.047589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac45f0 (9): Bad file descriptor 00:22:28.453 [2024-07-15 22:48:46.048586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.453 [2024-07-15 22:48:46.048607] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:28.453 [2024-07-15 22:48:46.048634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.453 request: 00:22:28.453 { 00:22:28.453 "name": "nvme0", 00:22:28.453 "trtype": "tcp", 00:22:28.453 "traddr": "127.0.0.1", 00:22:28.453 "adrfam": "ipv4", 00:22:28.453 "trsvcid": "4420", 00:22:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:28.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:28.453 "prchk_reftag": false, 00:22:28.453 "prchk_guard": false, 00:22:28.453 "hdgst": false, 00:22:28.453 "ddgst": false, 00:22:28.453 "psk": ":spdk-test:key1", 00:22:28.453 "method": "bdev_nvme_attach_controller", 00:22:28.453 "req_id": 1 00:22:28.453 } 00:22:28.453 Got JSON-RPC error response 00:22:28.453 response: 00:22:28.453 { 00:22:28.453 "code": -5, 00:22:28.453 "message": "Input/output error" 00:22:28.453 } 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@33 -- # sn=274364393 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 274364393 00:22:28.453 1 links removed 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@33 -- # sn=15003575 00:22:28.453 22:48:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 15003575 00:22:28.453 1 links removed 00:22:28.453 22:48:46 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 85862 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85862 ']' 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85862 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85862 00:22:28.453 killing process with pid 85862 00:22:28.453 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.453 00:22:28.453 Latency(us) 00:22:28.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.453 =================================================================================================================== 00:22:28.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85862' 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@967 -- # kill 85862 00:22:28.453 22:48:46 keyring_linux -- common/autotest_common.sh@972 -- # wait 85862 00:22:28.711 22:48:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85845 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85845 ']' 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85845 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85845 00:22:28.711 killing process with pid 85845 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85845' 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@967 -- # kill 85845 00:22:28.711 22:48:46 keyring_linux -- common/autotest_common.sh@972 -- # wait 85845 00:22:28.969 00:22:28.969 real 0m6.345s 00:22:28.969 user 0m12.368s 00:22:28.969 sys 0m1.562s 00:22:28.969 ************************************ 00:22:28.969 END TEST keyring_linux 00:22:28.969 ************************************ 00:22:28.969 22:48:46 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:28.969 22:48:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:28.969 22:48:46 -- common/autotest_common.sh@1142 -- # return 0 00:22:28.969 22:48:46 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
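
The killprocess calls above all follow the same pattern from common/autotest_common.sh, reconstructed here from the xtrace; the real helper has additional handling (for example for sudo-wrapped processes) that this sketch omits:

```sh
# Sketch only: the kill-and-reap pattern visible in the trace.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1            # is the process still running?
    if [ "$(uname)" = Linux ]; then
        ps --no-headers -o comm= "$pid"   # report what is about to be killed
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it and propagate its exit status
}
```
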
00:22:28.969 22:48:46 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:28.969 22:48:46 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:28.969 22:48:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:28.969 22:48:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:28.969 22:48:46 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:28.969 22:48:46 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:28.969 22:48:46 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:28.969 22:48:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.226 22:48:46 -- common/autotest_common.sh@10 -- # set +x 00:22:29.226 22:48:46 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:29.226 22:48:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:29.226 22:48:46 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:29.226 22:48:46 -- common/autotest_common.sh@10 -- # set +x 00:22:30.592 INFO: APP EXITING 00:22:30.592 INFO: killing all VMs 00:22:30.592 INFO: killing vhost app 00:22:30.592 INFO: EXIT DONE 00:22:31.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:31.464 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:31.464 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:32.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:32.031 Cleaning 00:22:32.031 Removing: /var/run/dpdk/spdk0/config 00:22:32.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:32.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:32.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:32.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:32.290 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:32.290 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:32.290 Removing: /var/run/dpdk/spdk1/config 00:22:32.290 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:32.290 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:32.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:32.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:32.291 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:32.291 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:32.291 Removing: /var/run/dpdk/spdk2/config 00:22:32.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:32.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:32.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:32.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:32.291 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:32.291 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:32.291 Removing: /var/run/dpdk/spdk3/config 00:22:32.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:32.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:32.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:32.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:32.291 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:32.291 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:32.291 Removing: /var/run/dpdk/spdk4/config 00:22:32.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:32.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:32.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:32.291 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:32.291 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:32.291 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:32.291 Removing: /dev/shm/nvmf_trace.0 00:22:32.291 Removing: /dev/shm/spdk_tgt_trace.pid58686 00:22:32.291 Removing: /var/run/dpdk/spdk0 00:22:32.291 Removing: /var/run/dpdk/spdk1 00:22:32.291 Removing: /var/run/dpdk/spdk2 00:22:32.291 Removing: /var/run/dpdk/spdk3 00:22:32.291 Removing: /var/run/dpdk/spdk4 00:22:32.291 Removing: /var/run/dpdk/spdk_pid58536 00:22:32.291 Removing: /var/run/dpdk/spdk_pid58686 00:22:32.291 Removing: /var/run/dpdk/spdk_pid58879 00:22:32.291 Removing: /var/run/dpdk/spdk_pid58971 00:22:32.291 Removing: /var/run/dpdk/spdk_pid58993 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59108 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59126 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59244 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59435 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59575 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59645 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59721 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59812 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59884 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59922 00:22:32.291 Removing: /var/run/dpdk/spdk_pid59958 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60019 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60119 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60557 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60609 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60660 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60676 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60754 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60770 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60837 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60853 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60904 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60922 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60962 00:22:32.291 Removing: /var/run/dpdk/spdk_pid60980 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61108 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61144 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61218 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61272 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61300 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61365 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61398 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61434 00:22:32.291 Removing: /var/run/dpdk/spdk_pid61474 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61503 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61543 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61572 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61612 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61652 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61681 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61721 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61758 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61792 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61833 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61862 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61902 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61937 00:22:32.551 Removing: /var/run/dpdk/spdk_pid61972 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62015 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62044 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62086 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62156 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62249 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62557 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62569 00:22:32.551 
Removing: /var/run/dpdk/spdk_pid62611 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62619 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62640 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62670 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62678 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62699 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62718 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62737 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62758 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62783 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62796 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62817 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62842 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62860 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62876 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62901 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62914 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62935 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62966 00:22:32.551 Removing: /var/run/dpdk/spdk_pid62985 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63020 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63078 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63107 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63122 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63150 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63160 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63173 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63210 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63229 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63263 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63267 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63282 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63286 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63301 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63316 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63320 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63335 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63359 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63390 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63405 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63429 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63443 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63451 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63491 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63508 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63540 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63548 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63561 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63568 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63581 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63589 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63602 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63609 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63683 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63731 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63842 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63870 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63915 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63935 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63957 00:22:32.551 Removing: /var/run/dpdk/spdk_pid63977 00:22:32.551 Removing: /var/run/dpdk/spdk_pid64014 00:22:32.551 Removing: /var/run/dpdk/spdk_pid64030 00:22:32.551 Removing: /var/run/dpdk/spdk_pid64094 00:22:32.551 Removing: /var/run/dpdk/spdk_pid64121 00:22:32.551 Removing: /var/run/dpdk/spdk_pid64165 00:22:32.551 Removing: /var/run/dpdk/spdk_pid64236 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64315 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64344 00:22:32.811 Removing: 
/var/run/dpdk/spdk_pid64436 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64484 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64511 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64735 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64833 00:22:32.811 Removing: /var/run/dpdk/spdk_pid64861 00:22:32.811 Removing: /var/run/dpdk/spdk_pid65179 00:22:32.811 Removing: /var/run/dpdk/spdk_pid65217 00:22:32.811 Removing: /var/run/dpdk/spdk_pid65502 00:22:32.811 Removing: /var/run/dpdk/spdk_pid65905 00:22:32.811 Removing: /var/run/dpdk/spdk_pid66177 00:22:32.811 Removing: /var/run/dpdk/spdk_pid66971 00:22:32.811 Removing: /var/run/dpdk/spdk_pid67783 00:22:32.811 Removing: /var/run/dpdk/spdk_pid67905 00:22:32.811 Removing: /var/run/dpdk/spdk_pid67977 00:22:32.811 Removing: /var/run/dpdk/spdk_pid69237 00:22:32.811 Removing: /var/run/dpdk/spdk_pid69445 00:22:32.811 Removing: /var/run/dpdk/spdk_pid72858 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73175 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73284 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73413 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73441 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73463 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73495 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73587 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73723 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73873 00:22:32.811 Removing: /var/run/dpdk/spdk_pid73954 00:22:32.811 Removing: /var/run/dpdk/spdk_pid74147 00:22:32.811 Removing: /var/run/dpdk/spdk_pid74231 00:22:32.811 Removing: /var/run/dpdk/spdk_pid74329 00:22:32.811 Removing: /var/run/dpdk/spdk_pid74635 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75022 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75024 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75306 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75326 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75340 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75371 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75380 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75680 00:22:32.811 Removing: /var/run/dpdk/spdk_pid75724 00:22:32.811 Removing: /var/run/dpdk/spdk_pid76006 00:22:32.811 Removing: /var/run/dpdk/spdk_pid76199 00:22:32.811 Removing: /var/run/dpdk/spdk_pid76591 00:22:32.811 Removing: /var/run/dpdk/spdk_pid77107 00:22:32.811 Removing: /var/run/dpdk/spdk_pid77923 00:22:32.811 Removing: /var/run/dpdk/spdk_pid78505 00:22:32.811 Removing: /var/run/dpdk/spdk_pid78507 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80418 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80484 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80543 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80599 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80720 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80773 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80833 00:22:32.811 Removing: /var/run/dpdk/spdk_pid80888 00:22:32.811 Removing: /var/run/dpdk/spdk_pid81217 00:22:32.811 Removing: /var/run/dpdk/spdk_pid82373 00:22:32.811 Removing: /var/run/dpdk/spdk_pid82513 00:22:32.811 Removing: /var/run/dpdk/spdk_pid82757 00:22:32.811 Removing: /var/run/dpdk/spdk_pid83303 00:22:32.811 Removing: /var/run/dpdk/spdk_pid83466 00:22:32.811 Removing: /var/run/dpdk/spdk_pid83623 00:22:32.811 Removing: /var/run/dpdk/spdk_pid83716 00:22:32.811 Removing: /var/run/dpdk/spdk_pid83877 00:22:32.811 Removing: /var/run/dpdk/spdk_pid83991 00:22:32.811 Removing: /var/run/dpdk/spdk_pid84649 00:22:32.811 Removing: /var/run/dpdk/spdk_pid84684 00:22:32.811 Removing: /var/run/dpdk/spdk_pid84720 00:22:32.811 Removing: /var/run/dpdk/spdk_pid84973 
00:22:32.811 Removing: /var/run/dpdk/spdk_pid85008 00:22:32.811 Removing: /var/run/dpdk/spdk_pid85038 00:22:32.811 Removing: /var/run/dpdk/spdk_pid85465 00:22:32.811 Removing: /var/run/dpdk/spdk_pid85482 00:22:32.811 Removing: /var/run/dpdk/spdk_pid85727 00:22:32.811 Removing: /var/run/dpdk/spdk_pid85845 00:22:32.811 Removing: /var/run/dpdk/spdk_pid85862 00:22:32.811 Clean 00:22:33.070 22:48:50 -- common/autotest_common.sh@1451 -- # return 0 00:22:33.070 22:48:50 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:33.070 22:48:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.070 22:48:50 -- common/autotest_common.sh@10 -- # set +x 00:22:33.070 22:48:50 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:33.070 22:48:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.070 22:48:50 -- common/autotest_common.sh@10 -- # set +x 00:22:33.070 22:48:50 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:33.070 22:48:50 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:33.070 22:48:50 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:33.070 22:48:50 -- spdk/autotest.sh@391 -- # hash lcov 00:22:33.070 22:48:50 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:33.070 22:48:50 -- spdk/autotest.sh@393 -- # hostname 00:22:33.070 22:48:50 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:33.329 geninfo: WARNING: invalid characters removed from testname! 
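Note: the `lcov --capture` invocation above and the merge/filter passes logged below are the standard coverage post-processing for this job: coverage gathered during the test run is captured against the checked-out tree, added onto the pre-test baseline, and then third-party and example paths are stripped out. A minimal sketch of that flow, assuming a repo path in $REPO and an output directory in $OUT (both placeholders; file names are illustrative):

    # capture coverage gathered during the test run (branch + function coverage enabled)
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --no-external -q -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"
    # merge the pre-test baseline with the test capture
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # drop paths that should not count toward SPDK coverage (DPDK, system headers)
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"

In the log that follows, the same removal step is repeated with additional globs ('*/examples/vmd/*', '*/app/spdk_lspci/*', '*/app/spdk_top/*') before the merged cov_total.info is left in the output directory.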
00:22:59.906 22:49:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:03.197 22:49:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:05.727 22:49:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:09.011 22:49:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:11.542 22:49:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:14.096 22:49:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:17.376 22:49:34 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:17.376 22:49:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:17.376 22:49:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:17.376 22:49:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.376 22:49:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.376 22:49:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.376 22:49:34 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.376 22:49:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.376 22:49:34 -- paths/export.sh@5 -- $ export PATH 00:23:17.376 22:49:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.376 22:49:34 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:17.376 22:49:34 -- common/autobuild_common.sh@444 -- $ date +%s 00:23:17.376 22:49:34 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721083774.XXXXXX 00:23:17.376 22:49:34 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721083774.kqzekH 00:23:17.376 22:49:34 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:23:17.376 22:49:34 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:23:17.376 22:49:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:17.376 22:49:34 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:17.376 22:49:34 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:17.376 22:49:34 -- common/autobuild_common.sh@460 -- $ get_config_params 00:23:17.376 22:49:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:23:17.376 22:49:34 -- common/autotest_common.sh@10 -- $ set +x 00:23:17.376 22:49:34 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:17.376 22:49:34 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:23:17.376 22:49:34 -- pm/common@17 -- $ local monitor 00:23:17.376 22:49:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:17.376 22:49:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:17.376 22:49:34 -- pm/common@25 -- $ sleep 1 00:23:17.376 22:49:34 -- pm/common@21 -- $ date +%s 00:23:17.376 22:49:34 -- pm/common@21 -- $ date +%s 00:23:17.376 22:49:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721083774 00:23:17.376 22:49:34 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721083774 00:23:17.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721083774_collect-cpu-load.pm.log 00:23:17.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721083774_collect-vmstat.pm.log 00:23:17.942 22:49:35 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:23:17.942 22:49:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:17.942 22:49:35 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:17.942 22:49:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:17.942 22:49:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:17.942 22:49:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:17.942 22:49:35 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:17.942 22:49:35 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:17.942 22:49:35 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:17.942 22:49:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:17.942 22:49:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:17.942 22:49:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:17.942 22:49:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:17.942 22:49:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:17.942 22:49:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:17.942 22:49:35 -- pm/common@44 -- $ pid=87587 00:23:17.942 22:49:35 -- pm/common@50 -- $ kill -TERM 87587 00:23:17.942 22:49:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:17.942 22:49:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:17.942 22:49:35 -- pm/common@44 -- $ pid=87589 00:23:17.942 22:49:35 -- pm/common@50 -- $ kill -TERM 87589 00:23:17.942 + [[ -n 5100 ]] 00:23:17.942 + sudo kill 5100 00:23:17.952 [Pipeline] } 00:23:17.970 [Pipeline] // timeout 00:23:17.976 [Pipeline] } 00:23:17.993 [Pipeline] // stage 00:23:17.999 [Pipeline] } 00:23:18.016 [Pipeline] // catchError 00:23:18.024 [Pipeline] stage 00:23:18.026 [Pipeline] { (Stop VM) 00:23:18.038 [Pipeline] sh 00:23:18.311 + vagrant halt 00:23:22.497 ==> default: Halting domain... 00:23:27.775 [Pipeline] sh 00:23:28.053 + vagrant destroy -f 00:23:31.338 ==> default: Removing domain... 
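The `stop_monitor_resources` trap in the autopackage step above shuts down the per-resource collectors (collect-cpu-load, collect-vmstat) by reading the pid files they wrote under the output power directory and sending SIGTERM. A minimal sketch of that pattern, not the actual pm/common implementation, using the pid-file names seen in this run (the directory path is illustrative):

    # stop each resource monitor whose pid file exists; ignore monitors that never started
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power   # as used in this run
    for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
        [[ -e $pidfile ]] || continue
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true   # monitor may already have exited
    done

Once the monitors are stopped and the leftover process (pid 5100 here) is killed, the Stop VM stage tears the test VM down with `vagrant halt` followed by `vagrant destroy -f`, as logged above.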
00:23:31.916 [Pipeline] sh 00:23:32.193 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:32.202 [Pipeline] } 00:23:32.221 [Pipeline] // stage 00:23:32.225 [Pipeline] } 00:23:32.243 [Pipeline] // dir 00:23:32.249 [Pipeline] } 00:23:32.266 [Pipeline] // wrap 00:23:32.272 [Pipeline] } 00:23:32.285 [Pipeline] // catchError 00:23:32.294 [Pipeline] stage 00:23:32.296 [Pipeline] { (Epilogue) 00:23:32.309 [Pipeline] sh 00:23:32.588 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:39.159 [Pipeline] catchError 00:23:39.161 [Pipeline] { 00:23:39.175 [Pipeline] sh 00:23:39.454 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:39.454 Artifacts sizes are good 00:23:39.462 [Pipeline] } 00:23:39.473 [Pipeline] // catchError 00:23:39.481 [Pipeline] archiveArtifacts 00:23:39.487 Archiving artifacts 00:23:39.649 [Pipeline] cleanWs 00:23:39.658 [WS-CLEANUP] Deleting project workspace... 00:23:39.658 [WS-CLEANUP] Deferred wipeout is used... 00:23:39.663 [WS-CLEANUP] done 00:23:39.665 [Pipeline] } 00:23:39.679 [Pipeline] // stage 00:23:39.684 [Pipeline] } 00:23:39.700 [Pipeline] // node 00:23:39.705 [Pipeline] End of Pipeline 00:23:39.875 Finished: SUCCESS